00:00:00.000 Started by upstream project "autotest-per-patch" build number 132395 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.040 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:02.614 The recommended git tool is: git 00:00:02.615 using credential 00000000-0000-0000-0000-000000000002 00:00:02.617 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:02.632 Fetching changes from the remote Git repository 00:00:02.636 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:02.649 Using shallow fetch with depth 1 00:00:02.649 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:02.649 > git --version # timeout=10 00:00:02.660 > git --version # 'git version 2.39.2' 00:00:02.660 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:02.672 Setting http proxy: proxy-dmz.intel.com:911 00:00:02.672 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.511 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.523 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.537 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.537 > git config core.sparsecheckout # timeout=10 00:00:08.551 > git read-tree -mu HEAD # timeout=10 00:00:08.568 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.596 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.597 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.696 [Pipeline] Start of Pipeline 00:00:08.708 [Pipeline] library 00:00:08.710 Loading library shm_lib@master 00:00:08.710 Library shm_lib@master is cached. Copying from home. 00:00:08.723 [Pipeline] node 00:00:08.734 Running on CYP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:08.735 [Pipeline] { 00:00:08.743 [Pipeline] catchError 00:00:08.744 [Pipeline] { 00:00:08.754 [Pipeline] wrap 00:00:08.761 [Pipeline] { 00:00:08.770 [Pipeline] stage 00:00:08.772 [Pipeline] { (Prologue) 00:00:08.952 [Pipeline] sh 00:00:09.231 + logger -p user.info -t JENKINS-CI 00:00:09.252 [Pipeline] echo 00:00:09.254 Node: CYP11 00:00:09.262 [Pipeline] sh 00:00:09.556 [Pipeline] setCustomBuildProperty 00:00:09.564 [Pipeline] echo 00:00:09.566 Cleanup processes 00:00:09.569 [Pipeline] sh 00:00:09.849 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.849 3573248 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.861 [Pipeline] sh 00:00:10.143 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.143 ++ grep -v 'sudo pgrep' 00:00:10.143 ++ awk '{print $1}' 00:00:10.143 + sudo kill -9 00:00:10.143 + true 00:00:10.161 [Pipeline] cleanWs 00:00:10.172 [WS-CLEANUP] Deleting project workspace... 00:00:10.172 [WS-CLEANUP] Deferred wipeout is used... 
00:00:10.179 [WS-CLEANUP] done 00:00:10.183 [Pipeline] setCustomBuildProperty 00:00:10.199 [Pipeline] sh 00:00:10.477 + sudo git config --global --replace-all safe.directory '*' 00:00:10.543 [Pipeline] httpRequest 00:00:11.217 [Pipeline] echo 00:00:11.219 Sorcerer 10.211.164.20 is alive 00:00:11.228 [Pipeline] retry 00:00:11.230 [Pipeline] { 00:00:11.248 [Pipeline] httpRequest 00:00:11.253 HttpMethod: GET 00:00:11.253 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.254 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.259 Response Code: HTTP/1.1 200 OK 00:00:11.260 Success: Status code 200 is in the accepted range: 200,404 00:00:11.260 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:41.861 [Pipeline] } 00:00:41.879 [Pipeline] // retry 00:00:41.887 [Pipeline] sh 00:00:42.173 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:42.189 [Pipeline] httpRequest 00:00:42.552 [Pipeline] echo 00:00:42.554 Sorcerer 10.211.164.20 is alive 00:00:42.565 [Pipeline] retry 00:00:42.567 [Pipeline] { 00:00:42.583 [Pipeline] httpRequest 00:00:42.589 HttpMethod: GET 00:00:42.589 URL: http://10.211.164.20/packages/spdk_a361eb5e2807baab35986e5161b461bb8015fc19.tar.gz 00:00:42.590 Sending request to url: http://10.211.164.20/packages/spdk_a361eb5e2807baab35986e5161b461bb8015fc19.tar.gz 00:00:42.596 Response Code: HTTP/1.1 200 OK 00:00:42.596 Success: Status code 200 is in the accepted range: 200,404 00:00:42.597 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a361eb5e2807baab35986e5161b461bb8015fc19.tar.gz 00:04:38.406 [Pipeline] } 00:04:38.423 [Pipeline] // retry 00:04:38.431 [Pipeline] sh 00:04:38.717 + tar --no-same-owner -xf spdk_a361eb5e2807baab35986e5161b461bb8015fc19.tar.gz 00:04:41.266 [Pipeline] sh 00:04:41.549 + git -C spdk log 
--oneline -n5 00:04:41.549 a361eb5e2 nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:04:41.549 4ab755590 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:04:41.549 f40c2e7bb dif: Add spdk_dif_pi_format_get_pi_size() to use for NVMe PRACT 00:04:41.549 325a79ea3 bdev/malloc: Support accel sequence when DIF is enabled 00:04:41.549 0b4b4be7e bdev: Add spdk_bdev_io_hide_metadata() for bdev modules 00:04:41.559 [Pipeline] } 00:04:41.573 [Pipeline] // stage 00:04:41.582 [Pipeline] stage 00:04:41.585 [Pipeline] { (Prepare) 00:04:41.600 [Pipeline] writeFile 00:04:41.616 [Pipeline] sh 00:04:41.899 + logger -p user.info -t JENKINS-CI 00:04:41.911 [Pipeline] sh 00:04:42.193 + logger -p user.info -t JENKINS-CI 00:04:42.204 [Pipeline] sh 00:04:42.485 + cat autorun-spdk.conf 00:04:42.486 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:42.486 SPDK_TEST_NVMF=1 00:04:42.486 SPDK_TEST_NVME_CLI=1 00:04:42.486 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:42.486 SPDK_TEST_NVMF_NICS=e810 00:04:42.486 SPDK_TEST_VFIOUSER=1 00:04:42.486 SPDK_RUN_UBSAN=1 00:04:42.486 NET_TYPE=phy 00:04:42.493 RUN_NIGHTLY=0 00:04:42.497 [Pipeline] readFile 00:04:42.520 [Pipeline] withEnv 00:04:42.522 [Pipeline] { 00:04:42.534 [Pipeline] sh 00:04:42.910 + set -ex 00:04:42.911 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:04:42.911 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:42.911 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:42.911 ++ SPDK_TEST_NVMF=1 00:04:42.911 ++ SPDK_TEST_NVME_CLI=1 00:04:42.911 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:42.911 ++ SPDK_TEST_NVMF_NICS=e810 00:04:42.911 ++ SPDK_TEST_VFIOUSER=1 00:04:42.911 ++ SPDK_RUN_UBSAN=1 00:04:42.911 ++ NET_TYPE=phy 00:04:42.911 ++ RUN_NIGHTLY=0 00:04:42.911 + case $SPDK_TEST_NVMF_NICS in 00:04:42.911 + DRIVERS=ice 00:04:42.911 + [[ tcp == \r\d\m\a ]] 00:04:42.911 + [[ -n ice ]] 00:04:42.911 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:04:42.911 rmmod: ERROR: Module mlx4_ib is not 
currently loaded 00:04:42.911 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:04:42.911 rmmod: ERROR: Module irdma is not currently loaded 00:04:42.911 rmmod: ERROR: Module i40iw is not currently loaded 00:04:42.911 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:04:42.911 + true 00:04:42.911 + for D in $DRIVERS 00:04:42.911 + sudo modprobe ice 00:04:42.911 + exit 0 00:04:42.920 [Pipeline] } 00:04:42.938 [Pipeline] // withEnv 00:04:42.945 [Pipeline] } 00:04:42.961 [Pipeline] // stage 00:04:42.972 [Pipeline] catchError 00:04:42.974 [Pipeline] { 00:04:42.989 [Pipeline] timeout 00:04:42.990 Timeout set to expire in 1 hr 0 min 00:04:42.992 [Pipeline] { 00:04:43.007 [Pipeline] stage 00:04:43.009 [Pipeline] { (Tests) 00:04:43.024 [Pipeline] sh 00:04:43.305 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:43.305 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:43.305 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:43.305 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:04:43.305 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:43.305 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:04:43.305 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:04:43.305 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:04:43.305 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:04:43.305 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:04:43.305 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:04:43.305 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:43.305 + source /etc/os-release 00:04:43.305 ++ NAME='Fedora Linux' 00:04:43.305 ++ VERSION='39 (Cloud Edition)' 00:04:43.305 ++ ID=fedora 00:04:43.305 ++ VERSION_ID=39 00:04:43.305 ++ VERSION_CODENAME= 00:04:43.305 ++ PLATFORM_ID=platform:f39 00:04:43.305 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:04:43.305 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:43.305 ++ LOGO=fedora-logo-icon 00:04:43.305 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:04:43.305 ++ HOME_URL=https://fedoraproject.org/ 00:04:43.305 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:04:43.305 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:43.305 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:43.305 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:43.305 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:04:43.305 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:43.305 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:04:43.305 ++ SUPPORT_END=2024-11-12 00:04:43.306 ++ VARIANT='Cloud Edition' 00:04:43.306 ++ VARIANT_ID=cloud 00:04:43.306 + uname -a 00:04:43.306 Linux spdk-cyp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:04:43.306 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:45.210 Hugepages 00:04:45.210 node hugesize free / total 00:04:45.210 node0 1048576kB 0 / 0 00:04:45.210 node0 2048kB 0 / 0 00:04:45.210 node1 1048576kB 0 / 0 00:04:45.210 node1 2048kB 0 / 0 00:04:45.210 00:04:45.210 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:45.210 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:45.210 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 
00:04:45.210 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:45.210 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:45.210 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:45.210 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:45.210 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:45.210 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:45.469 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:45.469 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:45.469 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:45.469 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:45.469 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:45.469 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:45.469 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:45.469 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:45.469 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:45.469 + rm -f /tmp/spdk-ld-path 00:04:45.469 + source autorun-spdk.conf 00:04:45.469 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:45.469 ++ SPDK_TEST_NVMF=1 00:04:45.469 ++ SPDK_TEST_NVME_CLI=1 00:04:45.469 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:45.469 ++ SPDK_TEST_NVMF_NICS=e810 00:04:45.469 ++ SPDK_TEST_VFIOUSER=1 00:04:45.469 ++ SPDK_RUN_UBSAN=1 00:04:45.469 ++ NET_TYPE=phy 00:04:45.469 ++ RUN_NIGHTLY=0 00:04:45.469 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:45.469 + [[ -n '' ]] 00:04:45.469 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:45.469 + for M in /var/spdk/build-*-manifest.txt 00:04:45.469 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:04:45.469 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:45.469 + for M in /var/spdk/build-*-manifest.txt 00:04:45.469 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:45.469 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:45.469 + for M in /var/spdk/build-*-manifest.txt 00:04:45.469 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:04:45.469 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:45.469 ++ uname 00:04:45.469 + [[ Linux == \L\i\n\u\x ]] 00:04:45.469 + sudo dmesg -T 00:04:45.469 + sudo dmesg --clear 00:04:45.469 + dmesg_pid=3575487 00:04:45.469 + [[ Fedora Linux == FreeBSD ]] 00:04:45.469 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:45.469 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:45.469 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:45.469 + [[ -x /usr/src/fio-static/fio ]] 00:04:45.469 + export FIO_BIN=/usr/src/fio-static/fio 00:04:45.469 + FIO_BIN=/usr/src/fio-static/fio 00:04:45.469 + sudo dmesg -Tw 00:04:45.469 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:45.469 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:45.469 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:45.469 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:45.469 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:45.469 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:45.469 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:45.469 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:45.469 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:45.469 14:24:52 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:45.469 14:24:52 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:45.469 14:24:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:45.469 14:24:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:04:45.469 14:24:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:04:45.469 14:24:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:04:45.470 14:24:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:04:45.470 14:24:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:04:45.470 14:24:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:04:45.470 14:24:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:04:45.470 14:24:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:04:45.470 14:24:52 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:04:45.470 14:24:52 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:45.470 14:24:52 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:45.470 14:24:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:45.470 14:24:52 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:45.470 14:24:52 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:45.470 14:24:52 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.470 14:24:52 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.470 14:24:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.470 14:24:52 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.470 14:24:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.470 14:24:52 -- paths/export.sh@5 -- $ export PATH 00:04:45.470 14:24:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.470 14:24:52 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:45.470 14:24:52 -- common/autobuild_common.sh@493 -- $ date +%s 00:04:45.470 14:24:52 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732109092.XXXXXX 00:04:45.470 14:24:52 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732109092.6r9E0l 00:04:45.470 14:24:52 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:04:45.470 14:24:52 -- 
common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:04:45.470 14:24:52 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:04:45.470 14:24:52 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:04:45.470 14:24:52 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:04:45.470 14:24:52 -- common/autobuild_common.sh@509 -- $ get_config_params 00:04:45.470 14:24:52 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:04:45.470 14:24:52 -- common/autotest_common.sh@10 -- $ set +x 00:04:45.470 14:24:52 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:04:45.470 14:24:52 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:04:45.470 14:24:52 -- pm/common@17 -- $ local monitor 00:04:45.470 14:24:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.470 14:24:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.470 14:24:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.470 14:24:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.470 14:24:52 -- pm/common@25 -- $ sleep 1 00:04:45.470 14:24:52 -- pm/common@21 -- $ date +%s 00:04:45.470 14:24:52 -- pm/common@21 -- $ date +%s 00:04:45.470 14:24:52 -- pm/common@21 -- $ date +%s 00:04:45.470 14:24:52 -- pm/common@21 -- $ date +%s 00:04:45.470 14:24:52 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732109092 00:04:45.470 14:24:52 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732109092 00:04:45.470 14:24:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732109092 00:04:45.470 14:24:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732109092 00:04:45.470 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732109092_collect-vmstat.pm.log 00:04:45.470 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732109092_collect-cpu-load.pm.log 00:04:45.470 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732109092_collect-cpu-temp.pm.log 00:04:45.730 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732109092_collect-bmc-pm.bmc.pm.log 00:04:46.669 14:24:53 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:04:46.669 14:24:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:46.669 14:24:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:46.669 14:24:53 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:46.669 14:24:53 -- spdk/autobuild.sh@16 -- $ date -u 00:04:46.669 Wed Nov 20 01:24:53 PM UTC 2024 00:04:46.669 14:24:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:04:46.669 v25.01-pre-248-ga361eb5e2 00:04:46.669 14:24:53 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:46.669 14:24:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:46.669 14:24:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:46.669 14:24:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:46.669 14:24:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:46.669 14:24:53 -- common/autotest_common.sh@10 -- $ set +x 00:04:46.669 ************************************ 00:04:46.669 START TEST ubsan 00:04:46.669 ************************************ 00:04:46.669 14:24:53 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:04:46.669 using ubsan 00:04:46.669 00:04:46.669 real 0m0.000s 00:04:46.669 user 0m0.000s 00:04:46.669 sys 0m0.000s 00:04:46.669 14:24:53 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:46.669 14:24:53 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:46.669 ************************************ 00:04:46.669 END TEST ubsan 00:04:46.669 ************************************ 00:04:46.669 14:24:53 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:46.669 14:24:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:46.669 14:24:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:46.669 14:24:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:46.669 14:24:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:46.669 14:24:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:46.669 14:24:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:46.669 14:24:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:46.669 14:24:53 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:04:46.669 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:04:46.669 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:46.929 Using 'verbs' RDMA provider 00:04:57.480 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:05:07.469 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:05:07.469 Creating mk/config.mk...done. 00:05:07.469 Creating mk/cc.flags.mk...done. 00:05:07.469 Type 'make' to build. 00:05:07.469 14:25:13 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:05:07.469 14:25:13 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:05:07.469 14:25:13 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:05:07.469 14:25:13 -- common/autotest_common.sh@10 -- $ set +x 00:05:07.469 ************************************ 00:05:07.469 START TEST make 00:05:07.469 ************************************ 00:05:07.469 14:25:13 make -- common/autotest_common.sh@1129 -- $ make -j144 00:05:07.469 make[1]: Nothing to be done for 'all'. 
00:05:08.407 The Meson build system 00:05:08.407 Version: 1.5.0 00:05:08.407 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:05:08.407 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:08.407 Build type: native build 00:05:08.407 Project name: libvfio-user 00:05:08.407 Project version: 0.0.1 00:05:08.407 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:08.407 C linker for the host machine: cc ld.bfd 2.40-14 00:05:08.407 Host machine cpu family: x86_64 00:05:08.407 Host machine cpu: x86_64 00:05:08.407 Run-time dependency threads found: YES 00:05:08.407 Library dl found: YES 00:05:08.407 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:08.407 Run-time dependency json-c found: YES 0.17 00:05:08.407 Run-time dependency cmocka found: YES 1.1.7 00:05:08.407 Program pytest-3 found: NO 00:05:08.407 Program flake8 found: NO 00:05:08.407 Program misspell-fixer found: NO 00:05:08.407 Program restructuredtext-lint found: NO 00:05:08.407 Program valgrind found: YES (/usr/bin/valgrind) 00:05:08.407 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:08.407 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:08.407 Compiler for C supports arguments -Wwrite-strings: YES 00:05:08.407 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:05:08.407 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:05:08.407 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:05:08.407 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:05:08.407 Build targets in project: 8 00:05:08.407 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:05:08.407 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:05:08.407 00:05:08.407 libvfio-user 0.0.1 00:05:08.407 00:05:08.407 User defined options 00:05:08.407 buildtype : debug 00:05:08.407 default_library: shared 00:05:08.407 libdir : /usr/local/lib 00:05:08.407 00:05:08.407 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:08.666 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:05:08.666 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:05:08.666 [2/37] Compiling C object samples/null.p/null.c.o 00:05:08.666 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:05:08.666 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:05:08.666 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:05:08.666 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:05:08.666 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:05:08.666 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:05:08.666 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:05:08.666 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:05:08.666 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:05:08.666 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:05:08.666 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:05:08.666 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:05:08.666 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:05:08.926 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:05:08.926 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:05:08.926 [18/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:05:08.926 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:05:08.926 [20/37] Compiling C object samples/server.p/server.c.o 00:05:08.926 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:05:08.926 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:05:08.926 [23/37] Compiling C object test/unit_tests.p/mocks.c.o 00:05:08.926 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:05:08.926 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:05:08.926 [26/37] Compiling C object samples/client.p/client.c.o 00:05:08.926 [27/37] Linking target samples/client 00:05:08.926 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:05:08.926 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:05:08.926 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:05:08.926 [31/37] Linking target test/unit_tests 00:05:08.926 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:05:08.926 [33/37] Linking target samples/server 00:05:08.926 [34/37] Linking target samples/lspci 00:05:08.926 [35/37] Linking target samples/gpio-pci-idio-16 00:05:08.926 [36/37] Linking target samples/shadow_ioeventfd_server 00:05:08.926 [37/37] Linking target samples/null 00:05:08.926 INFO: autodetecting backend as ninja 00:05:08.926 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:08.926 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:09.497 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:05:09.497 ninja: no work to do. 
00:05:12.793 The Meson build system 00:05:12.793 Version: 1.5.0 00:05:12.793 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:05:12.793 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:05:12.793 Build type: native build 00:05:12.793 Program cat found: YES (/usr/bin/cat) 00:05:12.793 Project name: DPDK 00:05:12.793 Project version: 24.03.0 00:05:12.793 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:12.793 C linker for the host machine: cc ld.bfd 2.40-14 00:05:12.793 Host machine cpu family: x86_64 00:05:12.793 Host machine cpu: x86_64 00:05:12.793 Message: ## Building in Developer Mode ## 00:05:12.793 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:12.793 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:05:12.793 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:12.793 Program python3 found: YES (/usr/bin/python3) 00:05:12.793 Program cat found: YES (/usr/bin/cat) 00:05:12.793 Compiler for C supports arguments -march=native: YES 00:05:12.793 Checking for size of "void *" : 8 00:05:12.793 Checking for size of "void *" : 8 (cached) 00:05:12.793 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:05:12.793 Library m found: YES 00:05:12.793 Library numa found: YES 00:05:12.793 Has header "numaif.h" : YES 00:05:12.793 Library fdt found: NO 00:05:12.793 Library execinfo found: NO 00:05:12.793 Has header "execinfo.h" : YES 00:05:12.793 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:12.793 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:12.793 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:12.793 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:12.793 Run-time dependency openssl found: YES 3.1.1 00:05:12.793 Run-time 
dependency libpcap found: YES 1.10.4 00:05:12.793 Has header "pcap.h" with dependency libpcap: YES 00:05:12.793 Compiler for C supports arguments -Wcast-qual: YES 00:05:12.793 Compiler for C supports arguments -Wdeprecated: YES 00:05:12.793 Compiler for C supports arguments -Wformat: YES 00:05:12.793 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:12.793 Compiler for C supports arguments -Wformat-security: NO 00:05:12.793 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:12.793 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:12.793 Compiler for C supports arguments -Wnested-externs: YES 00:05:12.793 Compiler for C supports arguments -Wold-style-definition: YES 00:05:12.793 Compiler for C supports arguments -Wpointer-arith: YES 00:05:12.793 Compiler for C supports arguments -Wsign-compare: YES 00:05:12.793 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:12.793 Compiler for C supports arguments -Wundef: YES 00:05:12.793 Compiler for C supports arguments -Wwrite-strings: YES 00:05:12.793 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:12.793 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:12.793 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:12.793 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:12.793 Program objdump found: YES (/usr/bin/objdump) 00:05:12.793 Compiler for C supports arguments -mavx512f: YES 00:05:12.793 Checking if "AVX512 checking" compiles: YES 00:05:12.793 Fetching value of define "__SSE4_2__" : 1 00:05:12.793 Fetching value of define "__AES__" : 1 00:05:12.793 Fetching value of define "__AVX__" : 1 00:05:12.793 Fetching value of define "__AVX2__" : 1 00:05:12.793 Fetching value of define "__AVX512BW__" : 1 00:05:12.793 Fetching value of define "__AVX512CD__" : 1 00:05:12.793 Fetching value of define "__AVX512DQ__" : 1 00:05:12.793 Fetching value of define "__AVX512F__" : 1 
00:05:12.793 Fetching value of define "__AVX512VL__" : 1 00:05:12.793 Fetching value of define "__PCLMUL__" : 1 00:05:12.793 Fetching value of define "__RDRND__" : 1 00:05:12.793 Fetching value of define "__RDSEED__" : 1 00:05:12.793 Fetching value of define "__VPCLMULQDQ__" : 1 00:05:12.793 Fetching value of define "__znver1__" : (undefined) 00:05:12.793 Fetching value of define "__znver2__" : (undefined) 00:05:12.793 Fetching value of define "__znver3__" : (undefined) 00:05:12.793 Fetching value of define "__znver4__" : (undefined) 00:05:12.793 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:12.793 Message: lib/log: Defining dependency "log" 00:05:12.793 Message: lib/kvargs: Defining dependency "kvargs" 00:05:12.793 Message: lib/telemetry: Defining dependency "telemetry" 00:05:12.793 Checking for function "getentropy" : NO 00:05:12.793 Message: lib/eal: Defining dependency "eal" 00:05:12.793 Message: lib/ring: Defining dependency "ring" 00:05:12.793 Message: lib/rcu: Defining dependency "rcu" 00:05:12.793 Message: lib/mempool: Defining dependency "mempool" 00:05:12.793 Message: lib/mbuf: Defining dependency "mbuf" 00:05:12.793 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:12.793 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:12.793 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:12.793 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:12.793 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:12.793 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:05:12.793 Compiler for C supports arguments -mpclmul: YES 00:05:12.793 Compiler for C supports arguments -maes: YES 00:05:12.793 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:12.793 Compiler for C supports arguments -mavx512bw: YES 00:05:12.793 Compiler for C supports arguments -mavx512dq: YES 00:05:12.793 Compiler for C supports arguments -mavx512vl: YES 00:05:12.793 Compiler for C supports arguments -mvpclmulqdq: YES 
00:05:12.793 Compiler for C supports arguments -mavx2: YES 00:05:12.793 Compiler for C supports arguments -mavx: YES 00:05:12.793 Message: lib/net: Defining dependency "net" 00:05:12.793 Message: lib/meter: Defining dependency "meter" 00:05:12.793 Message: lib/ethdev: Defining dependency "ethdev" 00:05:12.793 Message: lib/pci: Defining dependency "pci" 00:05:12.793 Message: lib/cmdline: Defining dependency "cmdline" 00:05:12.793 Message: lib/hash: Defining dependency "hash" 00:05:12.793 Message: lib/timer: Defining dependency "timer" 00:05:12.793 Message: lib/compressdev: Defining dependency "compressdev" 00:05:12.793 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:12.793 Message: lib/dmadev: Defining dependency "dmadev" 00:05:12.793 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:12.793 Message: lib/power: Defining dependency "power" 00:05:12.793 Message: lib/reorder: Defining dependency "reorder" 00:05:12.793 Message: lib/security: Defining dependency "security" 00:05:12.793 Has header "linux/userfaultfd.h" : YES 00:05:12.793 Has header "linux/vduse.h" : YES 00:05:12.793 Message: lib/vhost: Defining dependency "vhost" 00:05:12.793 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:12.793 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:12.793 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:12.793 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:12.793 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:12.793 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:12.793 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:12.793 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:12.793 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:12.793 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:12.793 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:12.793 Configuring doxy-api-html.conf using configuration 00:05:12.793 Configuring doxy-api-man.conf using configuration 00:05:12.793 Program mandb found: YES (/usr/bin/mandb) 00:05:12.793 Program sphinx-build found: NO 00:05:12.793 Configuring rte_build_config.h using configuration 00:05:12.793 Message: 00:05:12.793 ================= 00:05:12.793 Applications Enabled 00:05:12.793 ================= 00:05:12.793 00:05:12.793 apps: 00:05:12.793 00:05:12.793 00:05:12.793 Message: 00:05:12.793 ================= 00:05:12.793 Libraries Enabled 00:05:12.793 ================= 00:05:12.793 00:05:12.793 libs: 00:05:12.793 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:12.794 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:12.794 cryptodev, dmadev, power, reorder, security, vhost, 00:05:12.794 00:05:12.794 Message: 00:05:12.794 =============== 00:05:12.794 Drivers Enabled 00:05:12.794 =============== 00:05:12.794 00:05:12.794 common: 00:05:12.794 00:05:12.794 bus: 00:05:12.794 pci, vdev, 00:05:12.794 mempool: 00:05:12.794 ring, 00:05:12.794 dma: 00:05:12.794 00:05:12.794 net: 00:05:12.794 00:05:12.794 crypto: 00:05:12.794 00:05:12.794 compress: 00:05:12.794 00:05:12.794 vdpa: 00:05:12.794 00:05:12.794 00:05:12.794 Message: 00:05:12.794 ================= 00:05:12.794 Content Skipped 00:05:12.794 ================= 00:05:12.794 00:05:12.794 apps: 00:05:12.794 dumpcap: explicitly disabled via build config 00:05:12.794 graph: explicitly disabled via build config 00:05:12.794 pdump: explicitly disabled via build config 00:05:12.794 proc-info: explicitly disabled via build config 00:05:12.794 test-acl: explicitly disabled via build config 00:05:12.794 test-bbdev: explicitly disabled via build config 00:05:12.794 test-cmdline: explicitly disabled via build config 00:05:12.794 test-compress-perf: explicitly disabled via build config 00:05:12.794 test-crypto-perf: explicitly disabled via build 
config 00:05:12.794 test-dma-perf: explicitly disabled via build config 00:05:12.794 test-eventdev: explicitly disabled via build config 00:05:12.794 test-fib: explicitly disabled via build config 00:05:12.794 test-flow-perf: explicitly disabled via build config 00:05:12.794 test-gpudev: explicitly disabled via build config 00:05:12.794 test-mldev: explicitly disabled via build config 00:05:12.794 test-pipeline: explicitly disabled via build config 00:05:12.794 test-pmd: explicitly disabled via build config 00:05:12.794 test-regex: explicitly disabled via build config 00:05:12.794 test-sad: explicitly disabled via build config 00:05:12.794 test-security-perf: explicitly disabled via build config 00:05:12.794 00:05:12.794 libs: 00:05:12.794 argparse: explicitly disabled via build config 00:05:12.794 metrics: explicitly disabled via build config 00:05:12.794 acl: explicitly disabled via build config 00:05:12.794 bbdev: explicitly disabled via build config 00:05:12.794 bitratestats: explicitly disabled via build config 00:05:12.794 bpf: explicitly disabled via build config 00:05:12.794 cfgfile: explicitly disabled via build config 00:05:12.794 distributor: explicitly disabled via build config 00:05:12.794 efd: explicitly disabled via build config 00:05:12.794 eventdev: explicitly disabled via build config 00:05:12.794 dispatcher: explicitly disabled via build config 00:05:12.794 gpudev: explicitly disabled via build config 00:05:12.794 gro: explicitly disabled via build config 00:05:12.794 gso: explicitly disabled via build config 00:05:12.794 ip_frag: explicitly disabled via build config 00:05:12.794 jobstats: explicitly disabled via build config 00:05:12.794 latencystats: explicitly disabled via build config 00:05:12.794 lpm: explicitly disabled via build config 00:05:12.794 member: explicitly disabled via build config 00:05:12.794 pcapng: explicitly disabled via build config 00:05:12.794 rawdev: explicitly disabled via build config 00:05:12.794 regexdev: explicitly 
disabled via build config 00:05:12.794 mldev: explicitly disabled via build config 00:05:12.794 rib: explicitly disabled via build config 00:05:12.794 sched: explicitly disabled via build config 00:05:12.794 stack: explicitly disabled via build config 00:05:12.794 ipsec: explicitly disabled via build config 00:05:12.794 pdcp: explicitly disabled via build config 00:05:12.794 fib: explicitly disabled via build config 00:05:12.794 port: explicitly disabled via build config 00:05:12.794 pdump: explicitly disabled via build config 00:05:12.794 table: explicitly disabled via build config 00:05:12.794 pipeline: explicitly disabled via build config 00:05:12.794 graph: explicitly disabled via build config 00:05:12.794 node: explicitly disabled via build config 00:05:12.794 00:05:12.794 drivers: 00:05:12.794 common/cpt: not in enabled drivers build config 00:05:12.794 common/dpaax: not in enabled drivers build config 00:05:12.794 common/iavf: not in enabled drivers build config 00:05:12.794 common/idpf: not in enabled drivers build config 00:05:12.794 common/ionic: not in enabled drivers build config 00:05:12.794 common/mvep: not in enabled drivers build config 00:05:12.794 common/octeontx: not in enabled drivers build config 00:05:12.794 bus/auxiliary: not in enabled drivers build config 00:05:12.794 bus/cdx: not in enabled drivers build config 00:05:12.794 bus/dpaa: not in enabled drivers build config 00:05:12.794 bus/fslmc: not in enabled drivers build config 00:05:12.794 bus/ifpga: not in enabled drivers build config 00:05:12.794 bus/platform: not in enabled drivers build config 00:05:12.794 bus/uacce: not in enabled drivers build config 00:05:12.794 bus/vmbus: not in enabled drivers build config 00:05:12.794 common/cnxk: not in enabled drivers build config 00:05:12.794 common/mlx5: not in enabled drivers build config 00:05:12.794 common/nfp: not in enabled drivers build config 00:05:12.794 common/nitrox: not in enabled drivers build config 00:05:12.794 common/qat: not 
in enabled drivers build config 00:05:12.794 common/sfc_efx: not in enabled drivers build config 00:05:12.794 mempool/bucket: not in enabled drivers build config 00:05:12.794 mempool/cnxk: not in enabled drivers build config 00:05:12.794 mempool/dpaa: not in enabled drivers build config 00:05:12.794 mempool/dpaa2: not in enabled drivers build config 00:05:12.794 mempool/octeontx: not in enabled drivers build config 00:05:12.794 mempool/stack: not in enabled drivers build config 00:05:12.794 dma/cnxk: not in enabled drivers build config 00:05:12.794 dma/dpaa: not in enabled drivers build config 00:05:12.794 dma/dpaa2: not in enabled drivers build config 00:05:12.794 dma/hisilicon: not in enabled drivers build config 00:05:12.794 dma/idxd: not in enabled drivers build config 00:05:12.794 dma/ioat: not in enabled drivers build config 00:05:12.794 dma/skeleton: not in enabled drivers build config 00:05:12.794 net/af_packet: not in enabled drivers build config 00:05:12.794 net/af_xdp: not in enabled drivers build config 00:05:12.794 net/ark: not in enabled drivers build config 00:05:12.794 net/atlantic: not in enabled drivers build config 00:05:12.794 net/avp: not in enabled drivers build config 00:05:12.794 net/axgbe: not in enabled drivers build config 00:05:12.794 net/bnx2x: not in enabled drivers build config 00:05:12.794 net/bnxt: not in enabled drivers build config 00:05:12.794 net/bonding: not in enabled drivers build config 00:05:12.794 net/cnxk: not in enabled drivers build config 00:05:12.794 net/cpfl: not in enabled drivers build config 00:05:12.794 net/cxgbe: not in enabled drivers build config 00:05:12.794 net/dpaa: not in enabled drivers build config 00:05:12.794 net/dpaa2: not in enabled drivers build config 00:05:12.794 net/e1000: not in enabled drivers build config 00:05:12.794 net/ena: not in enabled drivers build config 00:05:12.794 net/enetc: not in enabled drivers build config 00:05:12.794 net/enetfec: not in enabled drivers build config 
00:05:12.794 net/enic: not in enabled drivers build config 00:05:12.794 net/failsafe: not in enabled drivers build config 00:05:12.794 net/fm10k: not in enabled drivers build config 00:05:12.794 net/gve: not in enabled drivers build config 00:05:12.794 net/hinic: not in enabled drivers build config 00:05:12.794 net/hns3: not in enabled drivers build config 00:05:12.794 net/i40e: not in enabled drivers build config 00:05:12.794 net/iavf: not in enabled drivers build config 00:05:12.794 net/ice: not in enabled drivers build config 00:05:12.794 net/idpf: not in enabled drivers build config 00:05:12.794 net/igc: not in enabled drivers build config 00:05:12.794 net/ionic: not in enabled drivers build config 00:05:12.794 net/ipn3ke: not in enabled drivers build config 00:05:12.794 net/ixgbe: not in enabled drivers build config 00:05:12.794 net/mana: not in enabled drivers build config 00:05:12.794 net/memif: not in enabled drivers build config 00:05:12.794 net/mlx4: not in enabled drivers build config 00:05:12.794 net/mlx5: not in enabled drivers build config 00:05:12.794 net/mvneta: not in enabled drivers build config 00:05:12.794 net/mvpp2: not in enabled drivers build config 00:05:12.795 net/netvsc: not in enabled drivers build config 00:05:12.795 net/nfb: not in enabled drivers build config 00:05:12.795 net/nfp: not in enabled drivers build config 00:05:12.795 net/ngbe: not in enabled drivers build config 00:05:12.795 net/null: not in enabled drivers build config 00:05:12.795 net/octeontx: not in enabled drivers build config 00:05:12.795 net/octeon_ep: not in enabled drivers build config 00:05:12.795 net/pcap: not in enabled drivers build config 00:05:12.795 net/pfe: not in enabled drivers build config 00:05:12.795 net/qede: not in enabled drivers build config 00:05:12.795 net/ring: not in enabled drivers build config 00:05:12.795 net/sfc: not in enabled drivers build config 00:05:12.795 net/softnic: not in enabled drivers build config 00:05:12.795 net/tap: not in 
enabled drivers build config 00:05:12.795 net/thunderx: not in enabled drivers build config 00:05:12.795 net/txgbe: not in enabled drivers build config 00:05:12.795 net/vdev_netvsc: not in enabled drivers build config 00:05:12.795 net/vhost: not in enabled drivers build config 00:05:12.795 net/virtio: not in enabled drivers build config 00:05:12.795 net/vmxnet3: not in enabled drivers build config 00:05:12.795 raw/*: missing internal dependency, "rawdev" 00:05:12.795 crypto/armv8: not in enabled drivers build config 00:05:12.795 crypto/bcmfs: not in enabled drivers build config 00:05:12.795 crypto/caam_jr: not in enabled drivers build config 00:05:12.795 crypto/ccp: not in enabled drivers build config 00:05:12.795 crypto/cnxk: not in enabled drivers build config 00:05:12.795 crypto/dpaa_sec: not in enabled drivers build config 00:05:12.795 crypto/dpaa2_sec: not in enabled drivers build config 00:05:12.795 crypto/ipsec_mb: not in enabled drivers build config 00:05:12.795 crypto/mlx5: not in enabled drivers build config 00:05:12.795 crypto/mvsam: not in enabled drivers build config 00:05:12.795 crypto/nitrox: not in enabled drivers build config 00:05:12.795 crypto/null: not in enabled drivers build config 00:05:12.795 crypto/octeontx: not in enabled drivers build config 00:05:12.795 crypto/openssl: not in enabled drivers build config 00:05:12.795 crypto/scheduler: not in enabled drivers build config 00:05:12.795 crypto/uadk: not in enabled drivers build config 00:05:12.795 crypto/virtio: not in enabled drivers build config 00:05:12.795 compress/isal: not in enabled drivers build config 00:05:12.795 compress/mlx5: not in enabled drivers build config 00:05:12.795 compress/nitrox: not in enabled drivers build config 00:05:12.795 compress/octeontx: not in enabled drivers build config 00:05:12.795 compress/zlib: not in enabled drivers build config 00:05:12.795 regex/*: missing internal dependency, "regexdev" 00:05:12.795 ml/*: missing internal dependency, "mldev" 
00:05:12.795 vdpa/ifc: not in enabled drivers build config 00:05:12.795 vdpa/mlx5: not in enabled drivers build config 00:05:12.795 vdpa/nfp: not in enabled drivers build config 00:05:12.795 vdpa/sfc: not in enabled drivers build config 00:05:12.795 event/*: missing internal dependency, "eventdev" 00:05:12.795 baseband/*: missing internal dependency, "bbdev" 00:05:12.795 gpu/*: missing internal dependency, "gpudev" 00:05:12.795 00:05:12.795 00:05:12.795 Build targets in project: 84 00:05:12.795 00:05:12.795 DPDK 24.03.0 00:05:12.795 00:05:12.795 User defined options 00:05:12.795 buildtype : debug 00:05:12.795 default_library : shared 00:05:12.795 libdir : lib 00:05:12.795 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:12.795 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:12.795 c_link_args : 00:05:12.795 cpu_instruction_set: native 00:05:12.795 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:05:12.795 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:05:12.795 enable_docs : false 00:05:12.795 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:05:12.795 enable_kmods : false 00:05:12.795 max_lcores : 128 00:05:12.795 tests : false 00:05:12.795 00:05:12.795 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:12.795 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:05:13.062 [1/267] Compiling C object 
lib/librte_log.a.p/log_log.c.o 00:05:13.062 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:13.062 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:13.062 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:13.062 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:13.062 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:13.062 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:13.062 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:13.062 [9/267] Linking static target lib/librte_log.a 00:05:13.062 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:13.062 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:13.062 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:13.062 [13/267] Linking static target lib/librte_kvargs.a 00:05:13.062 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:13.062 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:13.062 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:13.062 [17/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:13.062 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:13.062 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:13.062 [20/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:13.062 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:13.062 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:13.062 [23/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:13.062 [24/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:13.322 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:13.322 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:13.322 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:13.322 [28/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:13.322 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:13.322 [30/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:13.322 [31/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:13.322 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:13.322 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:13.322 [34/267] Linking static target lib/librte_pci.a 00:05:13.322 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:13.322 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:13.322 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:13.322 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:13.322 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:13.322 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:13.322 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:13.322 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:13.322 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:13.322 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:13.582 [45/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:13.582 [46/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:13.582 [47/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.582 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:13.582 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:13.582 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:13.582 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:13.582 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:13.582 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:13.582 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:13.582 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:13.582 [56/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:13.582 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:13.582 [58/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:13.582 [59/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:13.582 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:13.582 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:13.582 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:13.582 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:13.582 [64/267] Linking static target lib/librte_telemetry.a 00:05:13.582 [65/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.582 [66/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:13.582 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:13.582 [68/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:13.582 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:13.582 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:13.582 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:13.582 [72/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:13.582 [73/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:05:13.582 [74/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:13.582 [75/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:13.582 [76/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:13.583 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:13.583 [78/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:13.583 [79/267] Linking static target lib/librte_meter.a 00:05:13.583 [80/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:13.583 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:13.583 [82/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:13.583 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:13.583 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:13.583 [85/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:13.583 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:13.583 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:13.583 [88/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:13.583 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:13.583 [90/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:13.583 [91/267] Compiling C object 
lib/librte_net.a.p/net_rte_net.c.o 00:05:13.583 [92/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:13.583 [93/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:13.583 [94/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:13.583 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:13.583 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:13.583 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:13.583 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:13.583 [99/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:13.583 [100/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:13.583 [101/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:13.583 [102/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:13.583 [103/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:13.583 [104/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:13.583 [105/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:13.583 [106/267] Linking static target lib/librte_ring.a 00:05:13.583 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:13.583 [108/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:13.583 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:13.583 [110/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:13.583 [111/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:13.583 [112/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:13.583 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 
00:05:13.583 [114/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:13.583 [115/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:13.583 [116/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:13.583 [117/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:13.583 [118/267] Linking static target lib/librte_cmdline.a 00:05:13.583 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:13.583 [120/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:13.583 [121/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:13.583 [122/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:13.583 [123/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:13.583 [124/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:13.583 [125/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:13.583 [126/267] Linking static target lib/librte_compressdev.a 00:05:13.583 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:13.583 [128/267] Linking static target lib/librte_timer.a 00:05:13.583 [129/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:13.583 [130/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:13.583 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:13.583 [132/267] Linking static target lib/librte_dmadev.a 00:05:13.583 [133/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:13.583 [134/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:13.583 [135/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:13.583 [136/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:13.583 [137/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:13.583 [138/267] Linking static target lib/librte_rcu.a 00:05:13.583 [139/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:13.583 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:13.583 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:13.583 [142/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.583 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:13.583 [144/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:13.583 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:13.583 [146/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:13.583 [147/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:13.583 [148/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:13.583 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:13.583 [150/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:13.583 [151/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:13.583 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:13.583 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:13.583 [154/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:13.583 [155/267] Linking static target lib/librte_power.a 00:05:13.583 [156/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:13.583 [157/267] Linking target lib/librte_log.so.24.1 00:05:13.583 [158/267] Linking static target lib/librte_mempool.a 00:05:13.583 [159/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 
00:05:13.583 [160/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:13.583 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:13.583 [162/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:13.583 [163/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:13.583 [164/267] Linking static target lib/librte_reorder.a 00:05:13.583 [165/267] Linking static target lib/librte_eal.a 00:05:13.583 [166/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:13.583 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:13.583 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:13.583 [169/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:13.583 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:13.844 [171/267] Linking static target lib/librte_net.a 00:05:13.844 [172/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:13.844 [173/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.844 [174/267] Linking static target lib/librte_mbuf.a 00:05:13.844 [175/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:13.844 [176/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:13.844 [177/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:13.844 [178/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:13.844 [179/267] Linking static target lib/librte_security.a 00:05:13.844 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:13.844 [181/267] Linking target lib/librte_kvargs.so.24.1 00:05:13.844 [182/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:13.844 [183/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:13.844 [184/267] Compiling 
C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:13.844 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:13.844 [186/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:13.844 [187/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.844 [188/267] Linking static target lib/librte_hash.a 00:05:13.844 [189/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:13.844 [190/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:13.844 [191/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:13.844 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:13.844 [193/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.844 [194/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.844 [195/267] Linking static target drivers/librte_bus_vdev.a 00:05:13.844 [196/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:13.844 [197/267] Linking target lib/librte_telemetry.so.24.1 00:05:13.844 [198/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:13.844 [199/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.844 [200/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.844 [201/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:13.844 [202/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.844 [203/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:13.844 [204/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:13.844 [205/267] 
Linking static target drivers/librte_bus_pci.a 00:05:13.844 [206/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:13.844 [207/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:13.844 [208/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:13.844 [209/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:13.844 [210/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:13.844 [211/267] Linking static target drivers/librte_mempool_ring.a 00:05:14.104 [212/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:14.104 [213/267] Linking static target lib/librte_cryptodev.a 00:05:14.104 [214/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.104 [215/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.104 [216/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.104 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.104 [218/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.104 [219/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.104 [220/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.104 [221/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.363 [222/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.363 [223/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:14.363 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command 
(wrapped by meson to capture output) 00:05:14.363 [225/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:14.363 [226/267] Linking static target lib/librte_ethdev.a 00:05:14.930 [227/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.189 [228/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:15.189 [229/267] Linking static target lib/librte_vhost.a 00:05:16.124 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.415 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.415 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.415 [233/267] Linking target lib/librte_eal.so.24.1 00:05:19.415 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:19.415 [235/267] Linking target lib/librte_meter.so.24.1 00:05:19.415 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:05:19.415 [237/267] Linking target lib/librte_ring.so.24.1 00:05:19.415 [238/267] Linking target lib/librte_timer.so.24.1 00:05:19.415 [239/267] Linking target lib/librte_dmadev.so.24.1 00:05:19.415 [240/267] Linking target lib/librte_pci.so.24.1 00:05:19.415 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:19.415 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:19.415 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:19.415 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:19.415 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:19.415 [246/267] Linking target lib/librte_mempool.so.24.1 00:05:19.415 [247/267] Linking target lib/librte_rcu.so.24.1 00:05:19.415 [248/267] Linking target 
drivers/librte_bus_pci.so.24.1 00:05:19.415 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:19.415 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:19.415 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:05:19.415 [252/267] Linking target lib/librte_mbuf.so.24.1 00:05:19.415 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:19.415 [254/267] Linking target lib/librte_reorder.so.24.1 00:05:19.415 [255/267] Linking target lib/librte_compressdev.so.24.1 00:05:19.415 [256/267] Linking target lib/librte_net.so.24.1 00:05:19.415 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:05:19.674 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:19.674 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:19.674 [260/267] Linking target lib/librte_security.so.24.1 00:05:19.674 [261/267] Linking target lib/librte_hash.so.24.1 00:05:19.674 [262/267] Linking target lib/librte_cmdline.so.24.1 00:05:19.674 [263/267] Linking target lib/librte_ethdev.so.24.1 00:05:19.674 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:19.674 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:19.674 [266/267] Linking target lib/librte_power.so.24.1 00:05:19.674 [267/267] Linking target lib/librte_vhost.so.24.1 00:05:19.674 INFO: autodetecting backend as ninja 00:05:19.674 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:05:31.876 CC lib/log/log.o 00:05:31.876 CC lib/ut_mock/mock.o 00:05:31.876 CC lib/ut/ut.o 00:05:31.876 CC lib/log/log_flags.o 00:05:31.876 CC lib/log/log_deprecated.o 00:05:31.876 LIB libspdk_ut.a 00:05:31.876 SO libspdk_ut.so.2.0 00:05:31.876 LIB libspdk_ut_mock.a 
00:05:31.876 LIB libspdk_log.a 00:05:31.876 SO libspdk_ut_mock.so.6.0 00:05:31.876 SYMLINK libspdk_ut.so 00:05:31.876 SO libspdk_log.so.7.1 00:05:31.876 SYMLINK libspdk_ut_mock.so 00:05:31.876 SYMLINK libspdk_log.so 00:05:31.876 CC lib/util/base64.o 00:05:31.876 CC lib/util/bit_array.o 00:05:31.876 CC lib/dma/dma.o 00:05:31.876 CC lib/util/crc16.o 00:05:31.876 CC lib/util/cpuset.o 00:05:31.876 CC lib/util/crc32c.o 00:05:31.876 CC lib/util/crc32.o 00:05:31.876 CC lib/util/crc32_ieee.o 00:05:31.876 CC lib/util/dif.o 00:05:31.876 CC lib/util/crc64.o 00:05:31.876 CC lib/util/fd.o 00:05:31.876 CC lib/util/file.o 00:05:31.876 CXX lib/trace_parser/trace.o 00:05:31.876 CC lib/util/fd_group.o 00:05:31.876 CC lib/util/hexlify.o 00:05:31.876 CC lib/ioat/ioat.o 00:05:31.876 CC lib/util/iov.o 00:05:31.876 CC lib/util/net.o 00:05:31.876 CC lib/util/math.o 00:05:31.876 CC lib/util/string.o 00:05:31.876 CC lib/util/pipe.o 00:05:31.876 CC lib/util/uuid.o 00:05:31.876 CC lib/util/strerror_tls.o 00:05:31.876 CC lib/util/xor.o 00:05:31.876 CC lib/util/md5.o 00:05:31.876 CC lib/util/zipf.o 00:05:31.876 CC lib/vfio_user/host/vfio_user_pci.o 00:05:31.876 CC lib/vfio_user/host/vfio_user.o 00:05:31.876 LIB libspdk_dma.a 00:05:31.876 SO libspdk_dma.so.5.0 00:05:31.876 SYMLINK libspdk_dma.so 00:05:31.876 LIB libspdk_ioat.a 00:05:31.876 SO libspdk_ioat.so.7.0 00:05:31.876 LIB libspdk_vfio_user.a 00:05:31.876 SYMLINK libspdk_ioat.so 00:05:31.876 SO libspdk_vfio_user.so.5.0 00:05:31.876 SYMLINK libspdk_vfio_user.so 00:05:31.876 LIB libspdk_util.a 00:05:31.876 SO libspdk_util.so.10.1 00:05:31.876 SYMLINK libspdk_util.so 00:05:31.876 LIB libspdk_trace_parser.a 00:05:31.876 SO libspdk_trace_parser.so.6.0 00:05:32.136 SYMLINK libspdk_trace_parser.so 00:05:32.136 CC lib/json/json_util.o 00:05:32.137 CC lib/json/json_parse.o 00:05:32.137 CC lib/json/json_write.o 00:05:32.137 CC lib/idxd/idxd_user.o 00:05:32.137 CC lib/idxd/idxd.o 00:05:32.137 CC lib/rdma_utils/rdma_utils.o 00:05:32.137 CC 
lib/idxd/idxd_kernel.o 00:05:32.137 CC lib/conf/conf.o 00:05:32.137 CC lib/vmd/vmd.o 00:05:32.137 CC lib/env_dpdk/env.o 00:05:32.137 CC lib/env_dpdk/memory.o 00:05:32.137 CC lib/vmd/led.o 00:05:32.137 CC lib/env_dpdk/pci.o 00:05:32.137 CC lib/env_dpdk/init.o 00:05:32.137 CC lib/env_dpdk/pci_ioat.o 00:05:32.137 CC lib/env_dpdk/threads.o 00:05:32.137 CC lib/env_dpdk/pci_virtio.o 00:05:32.137 CC lib/env_dpdk/pci_vmd.o 00:05:32.137 CC lib/env_dpdk/pci_idxd.o 00:05:32.137 CC lib/env_dpdk/sigbus_handler.o 00:05:32.137 CC lib/env_dpdk/pci_event.o 00:05:32.137 CC lib/env_dpdk/pci_dpdk.o 00:05:32.137 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:32.137 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:32.137 LIB libspdk_rdma_utils.a 00:05:32.137 SO libspdk_rdma_utils.so.1.0 00:05:32.137 LIB libspdk_conf.a 00:05:32.396 SO libspdk_conf.so.6.0 00:05:32.396 SYMLINK libspdk_rdma_utils.so 00:05:32.396 LIB libspdk_json.a 00:05:32.396 SYMLINK libspdk_conf.so 00:05:32.396 SO libspdk_json.so.6.0 00:05:32.396 SYMLINK libspdk_json.so 00:05:32.396 CC lib/rdma_provider/common.o 00:05:32.396 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:32.655 LIB libspdk_idxd.a 00:05:32.655 CC lib/jsonrpc/jsonrpc_server.o 00:05:32.655 CC lib/jsonrpc/jsonrpc_client.o 00:05:32.655 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:32.655 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:32.655 SO libspdk_idxd.so.12.1 00:05:32.655 LIB libspdk_vmd.a 00:05:32.655 SYMLINK libspdk_idxd.so 00:05:32.655 SO libspdk_vmd.so.6.0 00:05:32.655 SYMLINK libspdk_vmd.so 00:05:32.655 LIB libspdk_rdma_provider.a 00:05:32.655 SO libspdk_rdma_provider.so.7.0 00:05:32.655 SYMLINK libspdk_rdma_provider.so 00:05:32.655 LIB libspdk_jsonrpc.a 00:05:32.655 SO libspdk_jsonrpc.so.6.0 00:05:32.915 SYMLINK libspdk_jsonrpc.so 00:05:32.915 LIB libspdk_env_dpdk.a 00:05:32.915 SO libspdk_env_dpdk.so.15.1 00:05:32.915 CC lib/rpc/rpc.o 00:05:32.915 SYMLINK libspdk_env_dpdk.so 00:05:33.175 LIB libspdk_rpc.a 00:05:33.175 SO libspdk_rpc.so.6.0 00:05:33.175 SYMLINK libspdk_rpc.so 
00:05:33.434 CC lib/trace/trace.o 00:05:33.434 CC lib/trace/trace_flags.o 00:05:33.434 CC lib/trace/trace_rpc.o 00:05:33.434 CC lib/notify/notify.o 00:05:33.434 CC lib/keyring/keyring_rpc.o 00:05:33.434 CC lib/notify/notify_rpc.o 00:05:33.434 CC lib/keyring/keyring.o 00:05:33.693 LIB libspdk_notify.a 00:05:33.693 SO libspdk_notify.so.6.0 00:05:33.693 LIB libspdk_keyring.a 00:05:33.693 LIB libspdk_trace.a 00:05:33.693 SYMLINK libspdk_notify.so 00:05:33.693 SO libspdk_keyring.so.2.0 00:05:33.693 SO libspdk_trace.so.11.0 00:05:33.693 SYMLINK libspdk_keyring.so 00:05:33.693 SYMLINK libspdk_trace.so 00:05:33.951 CC lib/sock/sock.o 00:05:33.951 CC lib/sock/sock_rpc.o 00:05:33.951 CC lib/thread/thread.o 00:05:33.951 CC lib/thread/iobuf.o 00:05:34.210 LIB libspdk_sock.a 00:05:34.210 SO libspdk_sock.so.10.0 00:05:34.469 SYMLINK libspdk_sock.so 00:05:34.469 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:34.469 CC lib/nvme/nvme_ctrlr.o 00:05:34.469 CC lib/nvme/nvme_fabric.o 00:05:34.469 CC lib/nvme/nvme_ns_cmd.o 00:05:34.469 CC lib/nvme/nvme_ns.o 00:05:34.469 CC lib/nvme/nvme_pcie_common.o 00:05:34.469 CC lib/nvme/nvme_pcie.o 00:05:34.469 CC lib/nvme/nvme_qpair.o 00:05:34.469 CC lib/nvme/nvme.o 00:05:34.469 CC lib/nvme/nvme_transport.o 00:05:34.469 CC lib/nvme/nvme_quirks.o 00:05:34.469 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:34.469 CC lib/nvme/nvme_discovery.o 00:05:34.469 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:34.469 CC lib/nvme/nvme_opal.o 00:05:34.469 CC lib/nvme/nvme_tcp.o 00:05:34.469 CC lib/nvme/nvme_io_msg.o 00:05:34.469 CC lib/nvme/nvme_poll_group.o 00:05:34.469 CC lib/nvme/nvme_zns.o 00:05:34.469 CC lib/nvme/nvme_stubs.o 00:05:34.469 CC lib/nvme/nvme_cuse.o 00:05:34.469 CC lib/nvme/nvme_auth.o 00:05:34.469 CC lib/nvme/nvme_rdma.o 00:05:34.469 CC lib/nvme/nvme_vfio_user.o 00:05:35.036 LIB libspdk_thread.a 00:05:35.036 SO libspdk_thread.so.11.0 00:05:35.036 SYMLINK libspdk_thread.so 00:05:35.295 CC lib/blob/blobstore.o 00:05:35.295 CC lib/blob/request.o 00:05:35.295 CC 
lib/blob/zeroes.o 00:05:35.295 CC lib/blob/blob_bs_dev.o 00:05:35.295 CC lib/accel/accel.o 00:05:35.295 CC lib/accel/accel_rpc.o 00:05:35.295 CC lib/accel/accel_sw.o 00:05:35.295 CC lib/vfu_tgt/tgt_endpoint.o 00:05:35.295 CC lib/vfu_tgt/tgt_rpc.o 00:05:35.295 CC lib/init/json_config.o 00:05:35.295 CC lib/init/subsystem_rpc.o 00:05:35.295 CC lib/init/subsystem.o 00:05:35.295 CC lib/init/rpc.o 00:05:35.295 CC lib/fsdev/fsdev.o 00:05:35.295 CC lib/fsdev/fsdev_io.o 00:05:35.295 CC lib/fsdev/fsdev_rpc.o 00:05:35.295 CC lib/virtio/virtio_vhost_user.o 00:05:35.295 CC lib/virtio/virtio.o 00:05:35.295 CC lib/virtio/virtio_vfio_user.o 00:05:35.295 CC lib/virtio/virtio_pci.o 00:05:35.555 LIB libspdk_init.a 00:05:35.555 SO libspdk_init.so.6.0 00:05:35.555 LIB libspdk_vfu_tgt.a 00:05:35.555 LIB libspdk_virtio.a 00:05:35.555 SYMLINK libspdk_init.so 00:05:35.555 SO libspdk_vfu_tgt.so.3.0 00:05:35.555 SO libspdk_virtio.so.7.0 00:05:35.555 SYMLINK libspdk_virtio.so 00:05:35.555 SYMLINK libspdk_vfu_tgt.so 00:05:35.814 CC lib/event/app.o 00:05:35.814 CC lib/event/reactor.o 00:05:35.814 CC lib/event/log_rpc.o 00:05:35.814 CC lib/event/app_rpc.o 00:05:35.814 CC lib/event/scheduler_static.o 00:05:35.814 LIB libspdk_fsdev.a 00:05:35.814 SO libspdk_fsdev.so.2.0 00:05:35.814 SYMLINK libspdk_fsdev.so 00:05:36.073 LIB libspdk_accel.a 00:05:36.073 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:36.073 SO libspdk_accel.so.16.0 00:05:36.073 LIB libspdk_event.a 00:05:36.073 SO libspdk_event.so.14.0 00:05:36.073 SYMLINK libspdk_accel.so 00:05:36.073 SYMLINK libspdk_event.so 00:05:36.331 LIB libspdk_nvme.a 00:05:36.331 CC lib/bdev/bdev.o 00:05:36.331 CC lib/bdev/bdev_rpc.o 00:05:36.331 CC lib/bdev/part.o 00:05:36.331 CC lib/bdev/bdev_zone.o 00:05:36.331 CC lib/bdev/scsi_nvme.o 00:05:36.331 SO libspdk_nvme.so.15.0 00:05:36.589 SYMLINK libspdk_nvme.so 00:05:36.589 LIB libspdk_fuse_dispatcher.a 00:05:36.589 SO libspdk_fuse_dispatcher.so.1.0 00:05:36.589 SYMLINK libspdk_fuse_dispatcher.so 00:05:37.157 
LIB libspdk_blob.a 00:05:37.157 SO libspdk_blob.so.11.0 00:05:37.157 SYMLINK libspdk_blob.so 00:05:37.415 CC lib/lvol/lvol.o 00:05:37.415 CC lib/blobfs/blobfs.o 00:05:37.415 CC lib/blobfs/tree.o 00:05:37.982 LIB libspdk_blobfs.a 00:05:37.982 SO libspdk_blobfs.so.10.0 00:05:38.243 SYMLINK libspdk_blobfs.so 00:05:38.243 LIB libspdk_lvol.a 00:05:38.243 SO libspdk_lvol.so.10.0 00:05:38.243 SYMLINK libspdk_lvol.so 00:05:38.243 LIB libspdk_bdev.a 00:05:38.243 SO libspdk_bdev.so.17.0 00:05:38.502 SYMLINK libspdk_bdev.so 00:05:38.502 CC lib/nvmf/ctrlr_discovery.o 00:05:38.502 CC lib/nvmf/ctrlr.o 00:05:38.502 CC lib/nvmf/ctrlr_bdev.o 00:05:38.502 CC lib/nvmf/subsystem.o 00:05:38.502 CC lib/nvmf/nvmf.o 00:05:38.502 CC lib/nvmf/nvmf_rpc.o 00:05:38.502 CC lib/ublk/ublk.o 00:05:38.502 CC lib/ublk/ublk_rpc.o 00:05:38.502 CC lib/nvmf/transport.o 00:05:38.502 CC lib/nvmf/mdns_server.o 00:05:38.502 CC lib/nvmf/tcp.o 00:05:38.502 CC lib/nvmf/rdma.o 00:05:38.502 CC lib/nvmf/stubs.o 00:05:38.502 CC lib/nvmf/auth.o 00:05:38.502 CC lib/nvmf/vfio_user.o 00:05:38.502 CC lib/scsi/dev.o 00:05:38.502 CC lib/scsi/lun.o 00:05:38.502 CC lib/scsi/port.o 00:05:38.502 CC lib/scsi/scsi.o 00:05:38.502 CC lib/ftl/ftl_core.o 00:05:38.502 CC lib/ftl/ftl_init.o 00:05:38.503 CC lib/scsi/scsi_pr.o 00:05:38.503 CC lib/scsi/scsi_bdev.o 00:05:38.503 CC lib/ftl/ftl_debug.o 00:05:38.503 CC lib/scsi/scsi_rpc.o 00:05:38.503 CC lib/scsi/task.o 00:05:38.503 CC lib/ftl/ftl_layout.o 00:05:38.503 CC lib/nbd/nbd.o 00:05:38.503 CC lib/ftl/ftl_io.o 00:05:38.503 CC lib/nbd/nbd_rpc.o 00:05:38.503 CC lib/ftl/ftl_sb.o 00:05:38.503 CC lib/ftl/ftl_l2p.o 00:05:38.503 CC lib/ftl/ftl_nv_cache.o 00:05:38.503 CC lib/ftl/ftl_l2p_flat.o 00:05:38.503 CC lib/ftl/ftl_band_ops.o 00:05:38.503 CC lib/ftl/ftl_band.o 00:05:38.503 CC lib/ftl/ftl_reloc.o 00:05:38.503 CC lib/ftl/ftl_rq.o 00:05:38.503 CC lib/ftl/ftl_writer.o 00:05:38.503 CC lib/ftl/ftl_l2p_cache.o 00:05:38.503 CC lib/ftl/ftl_p2l.o 00:05:38.503 CC lib/ftl/ftl_p2l_log.o 
00:05:38.503 CC lib/ftl/mngt/ftl_mngt.o 00:05:38.503 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:38.503 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:38.503 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:38.503 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:38.503 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:38.503 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:38.503 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:38.503 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:38.503 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:38.503 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:38.503 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:38.503 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:38.503 CC lib/ftl/utils/ftl_conf.o 00:05:38.503 CC lib/ftl/utils/ftl_md.o 00:05:38.503 CC lib/ftl/utils/ftl_mempool.o 00:05:38.503 CC lib/ftl/utils/ftl_bitmap.o 00:05:38.503 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:38.503 CC lib/ftl/utils/ftl_property.o 00:05:38.503 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:38.503 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:38.503 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:38.503 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:38.503 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:38.503 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:38.503 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:38.503 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:38.503 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:38.503 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:38.503 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:38.503 CC lib/ftl/base/ftl_base_dev.o 00:05:38.503 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:38.503 CC lib/ftl/ftl_trace.o 00:05:38.503 CC lib/ftl/base/ftl_base_bdev.o 00:05:39.069 LIB libspdk_nbd.a 00:05:39.069 LIB libspdk_scsi.a 00:05:39.069 SO libspdk_nbd.so.7.0 00:05:39.069 SO libspdk_scsi.so.9.0 00:05:39.069 SYMLINK libspdk_nbd.so 00:05:39.069 SYMLINK libspdk_scsi.so 00:05:39.327 LIB libspdk_ublk.a 00:05:39.327 SO libspdk_ublk.so.3.0 00:05:39.327 CC lib/iscsi/conn.o 00:05:39.327 CC lib/vhost/vhost.o 00:05:39.327 CC lib/iscsi/init_grp.o 00:05:39.327 CC lib/vhost/vhost_rpc.o 
00:05:39.327 CC lib/vhost/vhost_scsi.o 00:05:39.327 CC lib/iscsi/iscsi.o 00:05:39.327 CC lib/vhost/vhost_blk.o 00:05:39.327 CC lib/iscsi/portal_grp.o 00:05:39.327 CC lib/iscsi/param.o 00:05:39.327 CC lib/iscsi/tgt_node.o 00:05:39.327 CC lib/vhost/rte_vhost_user.o 00:05:39.327 CC lib/iscsi/iscsi_subsystem.o 00:05:39.327 CC lib/iscsi/iscsi_rpc.o 00:05:39.327 CC lib/iscsi/task.o 00:05:39.327 SYMLINK libspdk_ublk.so 00:05:39.327 LIB libspdk_ftl.a 00:05:39.586 SO libspdk_ftl.so.9.0 00:05:39.845 SYMLINK libspdk_ftl.so 00:05:40.105 LIB libspdk_nvmf.a 00:05:40.105 SO libspdk_nvmf.so.20.0 00:05:40.105 LIB libspdk_iscsi.a 00:05:40.105 LIB libspdk_vhost.a 00:05:40.105 SO libspdk_iscsi.so.8.0 00:05:40.105 SO libspdk_vhost.so.8.0 00:05:40.105 SYMLINK libspdk_nvmf.so 00:05:40.105 SYMLINK libspdk_vhost.so 00:05:40.363 SYMLINK libspdk_iscsi.so 00:05:40.623 CC module/vfu_device/vfu_virtio.o 00:05:40.623 CC module/vfu_device/vfu_virtio_blk.o 00:05:40.623 CC module/vfu_device/vfu_virtio_scsi.o 00:05:40.623 CC module/vfu_device/vfu_virtio_fs.o 00:05:40.623 CC module/vfu_device/vfu_virtio_rpc.o 00:05:40.623 CC module/env_dpdk/env_dpdk_rpc.o 00:05:40.623 CC module/scheduler/gscheduler/gscheduler.o 00:05:40.623 CC module/accel/dsa/accel_dsa.o 00:05:40.623 CC module/accel/dsa/accel_dsa_rpc.o 00:05:40.623 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:40.623 CC module/sock/posix/posix.o 00:05:40.623 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:40.623 CC module/accel/iaa/accel_iaa.o 00:05:40.623 CC module/accel/iaa/accel_iaa_rpc.o 00:05:40.623 CC module/keyring/file/keyring_rpc.o 00:05:40.623 CC module/keyring/file/keyring.o 00:05:40.623 CC module/accel/ioat/accel_ioat_rpc.o 00:05:40.623 CC module/accel/error/accel_error.o 00:05:40.623 CC module/accel/ioat/accel_ioat.o 00:05:40.623 CC module/accel/error/accel_error_rpc.o 00:05:40.623 CC module/fsdev/aio/fsdev_aio.o 00:05:40.623 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:40.623 CC module/blob/bdev/blob_bdev.o 00:05:40.623 CC 
module/keyring/linux/keyring.o 00:05:40.623 CC module/fsdev/aio/linux_aio_mgr.o 00:05:40.623 CC module/keyring/linux/keyring_rpc.o 00:05:40.623 LIB libspdk_env_dpdk_rpc.a 00:05:40.623 SO libspdk_env_dpdk_rpc.so.6.0 00:05:40.623 LIB libspdk_scheduler_dpdk_governor.a 00:05:40.623 LIB libspdk_keyring_linux.a 00:05:40.623 SYMLINK libspdk_env_dpdk_rpc.so 00:05:40.623 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:40.623 SO libspdk_keyring_linux.so.1.0 00:05:40.623 LIB libspdk_keyring_file.a 00:05:40.623 LIB libspdk_scheduler_gscheduler.a 00:05:40.623 SO libspdk_scheduler_gscheduler.so.4.0 00:05:40.623 SO libspdk_keyring_file.so.2.0 00:05:40.623 LIB libspdk_accel_error.a 00:05:40.882 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:40.882 SO libspdk_accel_error.so.2.0 00:05:40.882 SYMLINK libspdk_keyring_linux.so 00:05:40.882 LIB libspdk_accel_ioat.a 00:05:40.882 LIB libspdk_scheduler_dynamic.a 00:05:40.882 SYMLINK libspdk_scheduler_gscheduler.so 00:05:40.882 LIB libspdk_accel_dsa.a 00:05:40.882 LIB libspdk_accel_iaa.a 00:05:40.882 SO libspdk_accel_ioat.so.6.0 00:05:40.882 SYMLINK libspdk_keyring_file.so 00:05:40.882 SO libspdk_scheduler_dynamic.so.4.0 00:05:40.882 SO libspdk_accel_iaa.so.3.0 00:05:40.882 SYMLINK libspdk_accel_error.so 00:05:40.882 SO libspdk_accel_dsa.so.5.0 00:05:40.882 SYMLINK libspdk_scheduler_dynamic.so 00:05:40.883 SYMLINK libspdk_accel_ioat.so 00:05:40.883 SYMLINK libspdk_accel_iaa.so 00:05:40.883 SYMLINK libspdk_accel_dsa.so 00:05:40.883 LIB libspdk_blob_bdev.a 00:05:40.883 SO libspdk_blob_bdev.so.11.0 00:05:40.883 SYMLINK libspdk_blob_bdev.so 00:05:40.883 LIB libspdk_vfu_device.a 00:05:41.142 LIB libspdk_sock_posix.a 00:05:41.142 SO libspdk_sock_posix.so.6.0 00:05:41.142 SO libspdk_vfu_device.so.3.0 00:05:41.142 SYMLINK libspdk_sock_posix.so 00:05:41.142 SYMLINK libspdk_vfu_device.so 00:05:41.142 LIB libspdk_fsdev_aio.a 00:05:41.142 SO libspdk_fsdev_aio.so.1.0 00:05:41.142 CC module/bdev/delay/vbdev_delay.o 00:05:41.142 CC 
module/bdev/delay/vbdev_delay_rpc.o 00:05:41.142 CC module/bdev/raid/bdev_raid.o 00:05:41.142 CC module/bdev/null/bdev_null_rpc.o 00:05:41.142 CC module/bdev/raid/bdev_raid_rpc.o 00:05:41.142 CC module/bdev/raid/bdev_raid_sb.o 00:05:41.142 CC module/bdev/null/bdev_null.o 00:05:41.142 CC module/bdev/raid/raid0.o 00:05:41.142 CC module/bdev/aio/bdev_aio.o 00:05:41.142 CC module/bdev/raid/raid1.o 00:05:41.142 CC module/blobfs/bdev/blobfs_bdev.o 00:05:41.142 CC module/bdev/raid/concat.o 00:05:41.142 CC module/bdev/aio/bdev_aio_rpc.o 00:05:41.142 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:41.142 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:41.142 CC module/bdev/malloc/bdev_malloc.o 00:05:41.142 CC module/bdev/gpt/gpt.o 00:05:41.142 CC module/bdev/split/vbdev_split.o 00:05:41.142 CC module/bdev/gpt/vbdev_gpt.o 00:05:41.142 CC module/bdev/iscsi/bdev_iscsi.o 00:05:41.142 CC module/bdev/error/vbdev_error.o 00:05:41.142 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:41.142 CC module/bdev/split/vbdev_split_rpc.o 00:05:41.142 CC module/bdev/ftl/bdev_ftl.o 00:05:41.142 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:41.142 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:41.142 CC module/bdev/passthru/vbdev_passthru.o 00:05:41.142 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:41.142 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:41.142 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:41.142 CC module/bdev/error/vbdev_error_rpc.o 00:05:41.142 CC module/bdev/nvme/bdev_nvme.o 00:05:41.142 CC module/bdev/nvme/nvme_rpc.o 00:05:41.142 CC module/bdev/lvol/vbdev_lvol.o 00:05:41.142 CC module/bdev/nvme/bdev_mdns_client.o 00:05:41.142 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:41.142 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:41.142 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:41.142 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:41.142 CC module/bdev/nvme/vbdev_opal.o 00:05:41.142 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:41.142 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 
00:05:41.142 SYMLINK libspdk_fsdev_aio.so 00:05:41.403 LIB libspdk_blobfs_bdev.a 00:05:41.403 LIB libspdk_bdev_null.a 00:05:41.403 SO libspdk_blobfs_bdev.so.6.0 00:05:41.403 SO libspdk_bdev_null.so.6.0 00:05:41.403 LIB libspdk_bdev_ftl.a 00:05:41.403 LIB libspdk_bdev_split.a 00:05:41.403 SO libspdk_bdev_ftl.so.6.0 00:05:41.403 SO libspdk_bdev_split.so.6.0 00:05:41.403 SYMLINK libspdk_blobfs_bdev.so 00:05:41.403 SYMLINK libspdk_bdev_null.so 00:05:41.403 LIB libspdk_bdev_error.a 00:05:41.403 LIB libspdk_bdev_malloc.a 00:05:41.403 LIB libspdk_bdev_gpt.a 00:05:41.403 SO libspdk_bdev_malloc.so.6.0 00:05:41.403 SO libspdk_bdev_error.so.6.0 00:05:41.403 SYMLINK libspdk_bdev_ftl.so 00:05:41.404 LIB libspdk_bdev_passthru.a 00:05:41.404 SO libspdk_bdev_gpt.so.6.0 00:05:41.404 SYMLINK libspdk_bdev_split.so 00:05:41.404 SO libspdk_bdev_passthru.so.6.0 00:05:41.404 LIB libspdk_bdev_aio.a 00:05:41.404 LIB libspdk_bdev_zone_block.a 00:05:41.404 SYMLINK libspdk_bdev_error.so 00:05:41.404 SYMLINK libspdk_bdev_malloc.so 00:05:41.404 SO libspdk_bdev_aio.so.6.0 00:05:41.404 LIB libspdk_bdev_delay.a 00:05:41.404 LIB libspdk_bdev_virtio.a 00:05:41.404 SYMLINK libspdk_bdev_gpt.so 00:05:41.404 LIB libspdk_bdev_iscsi.a 00:05:41.404 SO libspdk_bdev_zone_block.so.6.0 00:05:41.404 SO libspdk_bdev_delay.so.6.0 00:05:41.404 SYMLINK libspdk_bdev_passthru.so 00:05:41.404 SO libspdk_bdev_virtio.so.6.0 00:05:41.663 SO libspdk_bdev_iscsi.so.6.0 00:05:41.663 SYMLINK libspdk_bdev_aio.so 00:05:41.663 SYMLINK libspdk_bdev_delay.so 00:05:41.663 SYMLINK libspdk_bdev_zone_block.so 00:05:41.663 SYMLINK libspdk_bdev_virtio.so 00:05:41.663 SYMLINK libspdk_bdev_iscsi.so 00:05:41.663 LIB libspdk_bdev_lvol.a 00:05:41.663 SO libspdk_bdev_lvol.so.6.0 00:05:41.663 SYMLINK libspdk_bdev_lvol.so 00:05:41.663 LIB libspdk_bdev_raid.a 00:05:41.922 SO libspdk_bdev_raid.so.6.0 00:05:41.922 SYMLINK libspdk_bdev_raid.so 00:05:43.301 LIB libspdk_bdev_nvme.a 00:05:43.301 SO libspdk_bdev_nvme.so.7.1 00:05:43.301 SYMLINK 
libspdk_bdev_nvme.so 00:05:43.562 CC module/event/subsystems/iobuf/iobuf.o 00:05:43.562 CC module/event/subsystems/scheduler/scheduler.o 00:05:43.562 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:43.562 CC module/event/subsystems/fsdev/fsdev.o 00:05:43.562 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:43.562 CC module/event/subsystems/keyring/keyring.o 00:05:43.562 CC module/event/subsystems/vmd/vmd.o 00:05:43.562 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:43.562 CC module/event/subsystems/sock/sock.o 00:05:43.562 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:43.562 LIB libspdk_event_vhost_blk.a 00:05:43.562 LIB libspdk_event_fsdev.a 00:05:43.562 LIB libspdk_event_scheduler.a 00:05:43.822 LIB libspdk_event_vfu_tgt.a 00:05:43.822 SO libspdk_event_vhost_blk.so.3.0 00:05:43.822 SO libspdk_event_fsdev.so.1.0 00:05:43.822 LIB libspdk_event_keyring.a 00:05:43.822 SO libspdk_event_vfu_tgt.so.3.0 00:05:43.822 SO libspdk_event_scheduler.so.4.0 00:05:43.822 LIB libspdk_event_vmd.a 00:05:43.822 LIB libspdk_event_iobuf.a 00:05:43.822 LIB libspdk_event_sock.a 00:05:43.822 SO libspdk_event_keyring.so.1.0 00:05:43.822 SO libspdk_event_iobuf.so.3.0 00:05:43.822 SO libspdk_event_vmd.so.6.0 00:05:43.822 SYMLINK libspdk_event_fsdev.so 00:05:43.822 SO libspdk_event_sock.so.5.0 00:05:43.822 SYMLINK libspdk_event_vfu_tgt.so 00:05:43.822 SYMLINK libspdk_event_vhost_blk.so 00:05:43.822 SYMLINK libspdk_event_scheduler.so 00:05:43.822 SYMLINK libspdk_event_keyring.so 00:05:43.822 SYMLINK libspdk_event_vmd.so 00:05:43.822 SYMLINK libspdk_event_sock.so 00:05:43.822 SYMLINK libspdk_event_iobuf.so 00:05:43.822 CC module/event/subsystems/accel/accel.o 00:05:44.081 LIB libspdk_event_accel.a 00:05:44.081 SO libspdk_event_accel.so.6.0 00:05:44.081 SYMLINK libspdk_event_accel.so 00:05:44.339 CC module/event/subsystems/bdev/bdev.o 00:05:44.339 LIB libspdk_event_bdev.a 00:05:44.339 SO libspdk_event_bdev.so.6.0 00:05:44.339 SYMLINK libspdk_event_bdev.so 00:05:44.599 CC 
module/event/subsystems/ublk/ublk.o 00:05:44.599 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:44.599 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:44.599 CC module/event/subsystems/scsi/scsi.o 00:05:44.599 CC module/event/subsystems/nbd/nbd.o 00:05:44.859 LIB libspdk_event_ublk.a 00:05:44.859 LIB libspdk_event_nbd.a 00:05:44.859 LIB libspdk_event_scsi.a 00:05:44.859 SO libspdk_event_ublk.so.3.0 00:05:44.859 SO libspdk_event_nbd.so.6.0 00:05:44.859 SO libspdk_event_scsi.so.6.0 00:05:44.859 SYMLINK libspdk_event_ublk.so 00:05:44.859 SYMLINK libspdk_event_nbd.so 00:05:44.859 LIB libspdk_event_nvmf.a 00:05:44.859 SYMLINK libspdk_event_scsi.so 00:05:44.859 SO libspdk_event_nvmf.so.6.0 00:05:44.859 SYMLINK libspdk_event_nvmf.so 00:05:45.119 CC module/event/subsystems/iscsi/iscsi.o 00:05:45.120 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:45.120 LIB libspdk_event_iscsi.a 00:05:45.120 LIB libspdk_event_vhost_scsi.a 00:05:45.120 SO libspdk_event_iscsi.so.6.0 00:05:45.120 SO libspdk_event_vhost_scsi.so.3.0 00:05:45.120 SYMLINK libspdk_event_iscsi.so 00:05:45.120 SYMLINK libspdk_event_vhost_scsi.so 00:05:45.378 SO libspdk.so.6.0 00:05:45.378 SYMLINK libspdk.so 00:05:45.378 CXX app/trace/trace.o 00:05:45.378 CC app/trace_record/trace_record.o 00:05:45.378 CC app/spdk_nvme_perf/perf.o 00:05:45.378 CC app/spdk_top/spdk_top.o 00:05:45.378 CC app/spdk_nvme_identify/identify.o 00:05:45.378 CC app/spdk_lspci/spdk_lspci.o 00:05:45.378 CC test/rpc_client/rpc_client_test.o 00:05:45.378 TEST_HEADER include/spdk/accel.h 00:05:45.378 TEST_HEADER include/spdk/accel_module.h 00:05:45.378 TEST_HEADER include/spdk/barrier.h 00:05:45.378 TEST_HEADER include/spdk/base64.h 00:05:45.378 TEST_HEADER include/spdk/assert.h 00:05:45.378 CC app/spdk_nvme_discover/discovery_aer.o 00:05:45.378 TEST_HEADER include/spdk/bdev.h 00:05:45.378 TEST_HEADER include/spdk/bdev_module.h 00:05:45.378 TEST_HEADER include/spdk/bdev_zone.h 00:05:45.378 TEST_HEADER include/spdk/bit_array.h 
00:05:45.378 TEST_HEADER include/spdk/bit_pool.h 00:05:45.378 TEST_HEADER include/spdk/blob_bdev.h 00:05:45.378 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:45.378 TEST_HEADER include/spdk/blobfs.h 00:05:45.378 TEST_HEADER include/spdk/blob.h 00:05:45.378 TEST_HEADER include/spdk/conf.h 00:05:45.378 TEST_HEADER include/spdk/config.h 00:05:45.378 TEST_HEADER include/spdk/cpuset.h 00:05:45.378 TEST_HEADER include/spdk/crc32.h 00:05:45.378 TEST_HEADER include/spdk/crc16.h 00:05:45.378 TEST_HEADER include/spdk/dif.h 00:05:45.378 TEST_HEADER include/spdk/crc64.h 00:05:45.378 TEST_HEADER include/spdk/dma.h 00:05:45.378 TEST_HEADER include/spdk/env_dpdk.h 00:05:45.378 TEST_HEADER include/spdk/endian.h 00:05:45.378 TEST_HEADER include/spdk/env.h 00:05:45.378 TEST_HEADER include/spdk/event.h 00:05:45.378 TEST_HEADER include/spdk/fd_group.h 00:05:45.378 TEST_HEADER include/spdk/fd.h 00:05:45.378 TEST_HEADER include/spdk/file.h 00:05:45.378 TEST_HEADER include/spdk/fsdev.h 00:05:45.378 TEST_HEADER include/spdk/fsdev_module.h 00:05:45.378 TEST_HEADER include/spdk/ftl.h 00:05:45.379 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:45.379 TEST_HEADER include/spdk/gpt_spec.h 00:05:45.379 TEST_HEADER include/spdk/hexlify.h 00:05:45.379 CC app/spdk_dd/spdk_dd.o 00:05:45.379 TEST_HEADER include/spdk/histogram_data.h 00:05:45.379 TEST_HEADER include/spdk/idxd.h 00:05:45.379 TEST_HEADER include/spdk/idxd_spec.h 00:05:45.379 TEST_HEADER include/spdk/ioat.h 00:05:45.379 TEST_HEADER include/spdk/init.h 00:05:45.379 TEST_HEADER include/spdk/ioat_spec.h 00:05:45.379 TEST_HEADER include/spdk/iscsi_spec.h 00:05:45.379 TEST_HEADER include/spdk/json.h 00:05:45.379 TEST_HEADER include/spdk/jsonrpc.h 00:05:45.379 TEST_HEADER include/spdk/keyring.h 00:05:45.379 TEST_HEADER include/spdk/keyring_module.h 00:05:45.379 TEST_HEADER include/spdk/likely.h 00:05:45.379 TEST_HEADER include/spdk/log.h 00:05:45.379 CC app/iscsi_tgt/iscsi_tgt.o 00:05:45.379 TEST_HEADER include/spdk/lvol.h 00:05:45.379 
TEST_HEADER include/spdk/md5.h 00:05:45.379 TEST_HEADER include/spdk/mmio.h 00:05:45.379 TEST_HEADER include/spdk/nbd.h 00:05:45.379 TEST_HEADER include/spdk/net.h 00:05:45.379 TEST_HEADER include/spdk/notify.h 00:05:45.379 TEST_HEADER include/spdk/memory.h 00:05:45.379 TEST_HEADER include/spdk/nvme.h 00:05:45.379 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:45.379 TEST_HEADER include/spdk/nvme_intel.h 00:05:45.379 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:45.379 TEST_HEADER include/spdk/nvme_spec.h 00:05:45.379 TEST_HEADER include/spdk/nvme_zns.h 00:05:45.379 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:45.639 TEST_HEADER include/spdk/nvmf.h 00:05:45.639 TEST_HEADER include/spdk/nvmf_spec.h 00:05:45.639 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:45.639 TEST_HEADER include/spdk/nvmf_transport.h 00:05:45.639 TEST_HEADER include/spdk/opal.h 00:05:45.639 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:45.639 TEST_HEADER include/spdk/pci_ids.h 00:05:45.639 TEST_HEADER include/spdk/opal_spec.h 00:05:45.639 TEST_HEADER include/spdk/queue.h 00:05:45.639 TEST_HEADER include/spdk/pipe.h 00:05:45.639 CC app/nvmf_tgt/nvmf_main.o 00:05:45.639 TEST_HEADER include/spdk/rpc.h 00:05:45.639 TEST_HEADER include/spdk/reduce.h 00:05:45.639 TEST_HEADER include/spdk/scheduler.h 00:05:45.639 TEST_HEADER include/spdk/scsi.h 00:05:45.639 TEST_HEADER include/spdk/scsi_spec.h 00:05:45.639 TEST_HEADER include/spdk/sock.h 00:05:45.639 TEST_HEADER include/spdk/stdinc.h 00:05:45.639 TEST_HEADER include/spdk/string.h 00:05:45.639 TEST_HEADER include/spdk/thread.h 00:05:45.639 TEST_HEADER include/spdk/trace.h 00:05:45.639 TEST_HEADER include/spdk/trace_parser.h 00:05:45.639 TEST_HEADER include/spdk/tree.h 00:05:45.639 TEST_HEADER include/spdk/ublk.h 00:05:45.639 TEST_HEADER include/spdk/util.h 00:05:45.639 TEST_HEADER include/spdk/uuid.h 00:05:45.639 TEST_HEADER include/spdk/version.h 00:05:45.639 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:45.639 TEST_HEADER include/spdk/vfio_user_spec.h 
00:05:45.639 TEST_HEADER include/spdk/vhost.h 00:05:45.639 TEST_HEADER include/spdk/vmd.h 00:05:45.639 TEST_HEADER include/spdk/xor.h 00:05:45.639 TEST_HEADER include/spdk/zipf.h 00:05:45.639 CXX test/cpp_headers/accel.o 00:05:45.639 CXX test/cpp_headers/accel_module.o 00:05:45.639 CXX test/cpp_headers/assert.o 00:05:45.639 CC app/spdk_tgt/spdk_tgt.o 00:05:45.639 CXX test/cpp_headers/barrier.o 00:05:45.639 CXX test/cpp_headers/base64.o 00:05:45.639 CXX test/cpp_headers/bdev.o 00:05:45.639 CXX test/cpp_headers/bdev_module.o 00:05:45.639 CXX test/cpp_headers/bdev_zone.o 00:05:45.639 CXX test/cpp_headers/bit_array.o 00:05:45.639 CXX test/cpp_headers/bit_pool.o 00:05:45.639 CXX test/cpp_headers/blob_bdev.o 00:05:45.639 CXX test/cpp_headers/blobfs_bdev.o 00:05:45.639 CXX test/cpp_headers/blobfs.o 00:05:45.639 CXX test/cpp_headers/blob.o 00:05:45.639 CXX test/cpp_headers/conf.o 00:05:45.639 CXX test/cpp_headers/config.o 00:05:45.639 CXX test/cpp_headers/cpuset.o 00:05:45.639 CXX test/cpp_headers/crc16.o 00:05:45.639 CXX test/cpp_headers/crc32.o 00:05:45.639 CXX test/cpp_headers/crc64.o 00:05:45.639 CXX test/cpp_headers/dif.o 00:05:45.639 CXX test/cpp_headers/dma.o 00:05:45.639 CXX test/cpp_headers/endian.o 00:05:45.639 CXX test/cpp_headers/env_dpdk.o 00:05:45.639 CXX test/cpp_headers/env.o 00:05:45.639 CXX test/cpp_headers/event.o 00:05:45.639 CXX test/cpp_headers/fd_group.o 00:05:45.639 CXX test/cpp_headers/fd.o 00:05:45.639 CXX test/cpp_headers/file.o 00:05:45.639 CXX test/cpp_headers/fsdev.o 00:05:45.639 CXX test/cpp_headers/fsdev_module.o 00:05:45.639 CXX test/cpp_headers/ftl.o 00:05:45.639 CXX test/cpp_headers/fuse_dispatcher.o 00:05:45.639 CXX test/cpp_headers/gpt_spec.o 00:05:45.639 CXX test/cpp_headers/histogram_data.o 00:05:45.639 CXX test/cpp_headers/hexlify.o 00:05:45.639 CXX test/cpp_headers/idxd.o 00:05:45.639 CXX test/cpp_headers/init.o 00:05:45.639 CXX test/cpp_headers/idxd_spec.o 00:05:45.639 CC examples/util/zipf/zipf.o 00:05:45.639 CXX 
test/cpp_headers/ioat_spec.o 00:05:45.639 CXX test/cpp_headers/ioat.o 00:05:45.639 CXX test/cpp_headers/iscsi_spec.o 00:05:45.639 CXX test/cpp_headers/json.o 00:05:45.639 CXX test/cpp_headers/keyring.o 00:05:45.639 CXX test/cpp_headers/jsonrpc.o 00:05:45.639 CXX test/cpp_headers/likely.o 00:05:45.639 CC test/app/jsoncat/jsoncat.o 00:05:45.639 CC test/app/histogram_perf/histogram_perf.o 00:05:45.639 CC test/app/stub/stub.o 00:05:45.639 CXX test/cpp_headers/log.o 00:05:45.639 CXX test/cpp_headers/keyring_module.o 00:05:45.639 CC test/thread/poller_perf/poller_perf.o 00:05:45.639 CXX test/cpp_headers/lvol.o 00:05:45.639 CXX test/cpp_headers/md5.o 00:05:45.639 CXX test/cpp_headers/memory.o 00:05:45.639 CXX test/cpp_headers/notify.o 00:05:45.639 CXX test/cpp_headers/mmio.o 00:05:45.639 CXX test/cpp_headers/nbd.o 00:05:45.639 CC test/env/memory/memory_ut.o 00:05:45.639 CXX test/cpp_headers/nvme.o 00:05:45.639 CXX test/cpp_headers/net.o 00:05:45.639 CXX test/cpp_headers/nvme_ocssd.o 00:05:45.639 CXX test/cpp_headers/nvme_intel.o 00:05:45.639 CC test/env/vtophys/vtophys.o 00:05:45.639 CC examples/ioat/verify/verify.o 00:05:45.639 CXX test/cpp_headers/nvme_spec.o 00:05:45.639 CXX test/cpp_headers/nvmf_cmd.o 00:05:45.639 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:45.639 CXX test/cpp_headers/nvme_zns.o 00:05:45.639 CXX test/cpp_headers/nvmf_spec.o 00:05:45.639 CXX test/cpp_headers/nvmf.o 00:05:45.639 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:45.639 CC examples/ioat/perf/perf.o 00:05:45.639 CXX test/cpp_headers/nvmf_transport.o 00:05:45.639 CXX test/cpp_headers/opal.o 00:05:45.639 CXX test/cpp_headers/opal_spec.o 00:05:45.639 CXX test/cpp_headers/pci_ids.o 00:05:45.639 CXX test/cpp_headers/pipe.o 00:05:45.639 CXX test/cpp_headers/queue.o 00:05:45.639 CXX test/cpp_headers/rpc.o 00:05:45.639 CXX test/cpp_headers/reduce.o 00:05:45.639 CXX test/cpp_headers/scheduler.o 00:05:45.639 CXX test/cpp_headers/scsi.o 00:05:45.639 CC test/env/pci/pci_ut.o 00:05:45.639 CXX 
test/cpp_headers/scsi_spec.o 00:05:45.639 CXX test/cpp_headers/sock.o 00:05:45.639 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:45.639 CXX test/cpp_headers/stdinc.o 00:05:45.639 CXX test/cpp_headers/trace_parser.o 00:05:45.639 CXX test/cpp_headers/string.o 00:05:45.639 CXX test/cpp_headers/thread.o 00:05:45.639 CXX test/cpp_headers/trace.o 00:05:45.639 CXX test/cpp_headers/tree.o 00:05:45.639 CXX test/cpp_headers/ublk.o 00:05:45.639 CXX test/cpp_headers/uuid.o 00:05:45.639 CXX test/cpp_headers/util.o 00:05:45.639 CXX test/cpp_headers/version.o 00:05:45.639 CXX test/cpp_headers/vhost.o 00:05:45.639 CXX test/cpp_headers/vfio_user_spec.o 00:05:45.639 CXX test/cpp_headers/vfio_user_pci.o 00:05:45.639 CXX test/cpp_headers/xor.o 00:05:45.639 CXX test/cpp_headers/vmd.o 00:05:45.639 CC test/app/bdev_svc/bdev_svc.o 00:05:45.639 CXX test/cpp_headers/zipf.o 00:05:45.639 CC test/dma/test_dma/test_dma.o 00:05:45.639 CC app/fio/nvme/fio_plugin.o 00:05:45.639 CC app/fio/bdev/fio_plugin.o 00:05:45.898 LINK spdk_lspci 00:05:45.898 LINK rpc_client_test 00:05:45.898 LINK nvmf_tgt 00:05:45.898 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:45.898 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:45.898 CC test/env/mem_callbacks/mem_callbacks.o 00:05:45.898 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:45.898 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:45.898 LINK interrupt_tgt 00:05:45.898 LINK vtophys 00:05:45.898 LINK spdk_nvme_discover 00:05:45.898 LINK spdk_trace_record 00:05:46.156 LINK spdk_dd 00:05:46.156 LINK iscsi_tgt 00:05:46.156 LINK poller_perf 00:05:46.156 LINK spdk_tgt 00:05:46.156 LINK zipf 00:05:46.156 LINK jsoncat 00:05:46.156 LINK histogram_perf 00:05:46.156 LINK spdk_trace 00:05:46.156 LINK stub 00:05:46.156 LINK env_dpdk_post_init 00:05:46.156 LINK bdev_svc 00:05:46.156 LINK verify 00:05:46.416 LINK ioat_perf 00:05:46.416 CC examples/vmd/lsvmd/lsvmd.o 00:05:46.416 CC examples/sock/hello_world/hello_sock.o 00:05:46.416 CC 
test/event/event_perf/event_perf.o 00:05:46.416 CC test/event/reactor_perf/reactor_perf.o 00:05:46.416 CC test/event/reactor/reactor.o 00:05:46.416 CC examples/vmd/led/led.o 00:05:46.416 CC examples/idxd/perf/perf.o 00:05:46.416 CC test/event/app_repeat/app_repeat.o 00:05:46.416 CC test/event/scheduler/scheduler.o 00:05:46.416 CC app/vhost/vhost.o 00:05:46.416 CC examples/thread/thread/thread_ex.o 00:05:46.416 LINK test_dma 00:05:46.416 LINK spdk_bdev 00:05:46.416 LINK pci_ut 00:05:46.416 LINK nvme_fuzz 00:05:46.416 LINK vhost_fuzz 00:05:46.416 LINK spdk_nvme 00:05:46.416 LINK lsvmd 00:05:46.416 LINK reactor_perf 00:05:46.416 LINK reactor 00:05:46.416 LINK event_perf 00:05:46.674 LINK mem_callbacks 00:05:46.674 LINK led 00:05:46.674 LINK spdk_top 00:05:46.674 LINK app_repeat 00:05:46.674 LINK hello_sock 00:05:46.674 LINK scheduler 00:05:46.674 LINK vhost 00:05:46.674 LINK idxd_perf 00:05:46.674 LINK spdk_nvme_perf 00:05:46.674 LINK spdk_nvme_identify 00:05:46.674 LINK thread 00:05:46.674 CC test/nvme/aer/aer.o 00:05:46.674 CC test/nvme/sgl/sgl.o 00:05:46.674 CC test/nvme/connect_stress/connect_stress.o 00:05:46.674 CC test/nvme/e2edp/nvme_dp.o 00:05:46.674 CC test/nvme/overhead/overhead.o 00:05:46.674 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:46.674 CC test/nvme/startup/startup.o 00:05:46.674 CC test/nvme/compliance/nvme_compliance.o 00:05:46.674 CC test/nvme/reset/reset.o 00:05:46.674 CC test/nvme/simple_copy/simple_copy.o 00:05:46.674 CC test/nvme/cuse/cuse.o 00:05:46.674 CC test/nvme/err_injection/err_injection.o 00:05:46.933 CC test/nvme/fused_ordering/fused_ordering.o 00:05:46.933 CC test/nvme/boot_partition/boot_partition.o 00:05:46.933 CC test/nvme/fdp/fdp.o 00:05:46.933 CC test/nvme/reserve/reserve.o 00:05:46.933 CC test/accel/dif/dif.o 00:05:46.933 CC test/blobfs/mkfs/mkfs.o 00:05:46.933 CC test/lvol/esnap/esnap.o 00:05:46.933 CC examples/nvme/hello_world/hello_world.o 00:05:46.933 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:46.933 CC 
examples/nvme/abort/abort.o 00:05:46.933 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:46.933 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:46.933 CC examples/nvme/arbitration/arbitration.o 00:05:46.933 CC examples/nvme/hotplug/hotplug.o 00:05:46.933 CC examples/nvme/reconnect/reconnect.o 00:05:46.933 LINK memory_ut 00:05:46.933 LINK startup 00:05:46.933 LINK doorbell_aers 00:05:46.933 LINK connect_stress 00:05:46.933 LINK boot_partition 00:05:46.933 LINK simple_copy 00:05:46.933 LINK reserve 00:05:46.933 CC examples/accel/perf/accel_perf.o 00:05:46.933 LINK mkfs 00:05:46.933 LINK err_injection 00:05:46.933 LINK fused_ordering 00:05:46.933 LINK pmr_persistence 00:05:46.933 CC examples/blob/cli/blobcli.o 00:05:46.933 LINK nvme_compliance 00:05:46.933 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:46.933 LINK cmb_copy 00:05:46.933 CC examples/blob/hello_world/hello_blob.o 00:05:46.933 LINK hello_world 00:05:46.933 LINK fdp 00:05:46.933 LINK aer 00:05:46.933 LINK sgl 00:05:46.933 LINK nvme_dp 00:05:46.933 LINK reset 00:05:47.193 LINK overhead 00:05:47.193 LINK hotplug 00:05:47.193 LINK abort 00:05:47.193 LINK arbitration 00:05:47.193 LINK reconnect 00:05:47.193 LINK hello_blob 00:05:47.193 LINK hello_fsdev 00:05:47.193 LINK dif 00:05:47.193 LINK nvme_manage 00:05:47.193 LINK accel_perf 00:05:47.193 LINK blobcli 00:05:47.454 LINK iscsi_fuzz 00:05:47.454 CC test/bdev/bdevio/bdevio.o 00:05:47.715 CC examples/bdev/hello_world/hello_bdev.o 00:05:47.715 CC examples/bdev/bdevperf/bdevperf.o 00:05:47.715 LINK hello_bdev 00:05:47.715 LINK cuse 00:05:47.974 LINK bdevio 00:05:47.974 LINK bdevperf 00:05:48.541 CC examples/nvmf/nvmf/nvmf.o 00:05:48.541 LINK nvmf 00:05:49.920 LINK esnap 00:05:50.179 00:05:50.179 real 0m43.122s 00:05:50.179 user 6m23.198s 00:05:50.179 sys 3m24.887s 00:05:50.179 14:25:57 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:50.179 14:25:57 make -- common/autotest_common.sh@10 -- $ set +x 00:05:50.179 
************************************ 00:05:50.179 END TEST make 00:05:50.179 ************************************ 00:05:50.179 14:25:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:50.179 14:25:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:50.179 14:25:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:50.179 14:25:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:50.179 14:25:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:50.179 14:25:57 -- pm/common@44 -- $ pid=3575529 00:05:50.179 14:25:57 -- pm/common@50 -- $ kill -TERM 3575529 00:05:50.180 14:25:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:50.180 14:25:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:50.180 14:25:57 -- pm/common@44 -- $ pid=3575530 00:05:50.180 14:25:57 -- pm/common@50 -- $ kill -TERM 3575530 00:05:50.180 14:25:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:50.180 14:25:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:50.180 14:25:57 -- pm/common@44 -- $ pid=3575531 00:05:50.180 14:25:57 -- pm/common@50 -- $ kill -TERM 3575531 00:05:50.180 14:25:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:50.180 14:25:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:50.180 14:25:57 -- pm/common@44 -- $ pid=3575558 00:05:50.180 14:25:57 -- pm/common@50 -- $ sudo -E kill -TERM 3575558 00:05:50.180 14:25:57 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:50.180 14:25:57 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 
00:05:50.180 14:25:57 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.180 14:25:57 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.180 14:25:57 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.180 14:25:57 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.180 14:25:57 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.180 14:25:57 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.180 14:25:57 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.180 14:25:57 -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.180 14:25:57 -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.180 14:25:57 -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.180 14:25:57 -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.180 14:25:57 -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.180 14:25:57 -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.180 14:25:57 -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.180 14:25:57 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.180 14:25:57 -- scripts/common.sh@344 -- # case "$op" in 00:05:50.180 14:25:57 -- scripts/common.sh@345 -- # : 1 00:05:50.180 14:25:57 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.180 14:25:57 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.180 14:25:57 -- scripts/common.sh@365 -- # decimal 1 00:05:50.180 14:25:57 -- scripts/common.sh@353 -- # local d=1 00:05:50.180 14:25:57 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.180 14:25:57 -- scripts/common.sh@355 -- # echo 1 00:05:50.180 14:25:57 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.180 14:25:57 -- scripts/common.sh@366 -- # decimal 2 00:05:50.180 14:25:57 -- scripts/common.sh@353 -- # local d=2 00:05:50.180 14:25:57 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.180 14:25:57 -- scripts/common.sh@355 -- # echo 2 00:05:50.180 14:25:57 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.180 14:25:57 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.180 14:25:57 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.180 14:25:57 -- scripts/common.sh@368 -- # return 0 00:05:50.180 14:25:57 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.180 14:25:57 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.180 --rc genhtml_branch_coverage=1 00:05:50.180 --rc genhtml_function_coverage=1 00:05:50.180 --rc genhtml_legend=1 00:05:50.180 --rc geninfo_all_blocks=1 00:05:50.180 --rc geninfo_unexecuted_blocks=1 00:05:50.180 00:05:50.180 ' 00:05:50.180 14:25:57 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.180 --rc genhtml_branch_coverage=1 00:05:50.180 --rc genhtml_function_coverage=1 00:05:50.180 --rc genhtml_legend=1 00:05:50.180 --rc geninfo_all_blocks=1 00:05:50.180 --rc geninfo_unexecuted_blocks=1 00:05:50.180 00:05:50.180 ' 00:05:50.180 14:25:57 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.180 --rc genhtml_branch_coverage=1 00:05:50.180 --rc 
genhtml_function_coverage=1 00:05:50.180 --rc genhtml_legend=1 00:05:50.180 --rc geninfo_all_blocks=1 00:05:50.180 --rc geninfo_unexecuted_blocks=1 00:05:50.180 00:05:50.180 ' 00:05:50.180 14:25:57 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.180 --rc genhtml_branch_coverage=1 00:05:50.180 --rc genhtml_function_coverage=1 00:05:50.180 --rc genhtml_legend=1 00:05:50.180 --rc geninfo_all_blocks=1 00:05:50.180 --rc geninfo_unexecuted_blocks=1 00:05:50.180 00:05:50.180 ' 00:05:50.180 14:25:57 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:50.180 14:25:57 -- nvmf/common.sh@7 -- # uname -s 00:05:50.180 14:25:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.180 14:25:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.180 14:25:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.180 14:25:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.180 14:25:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.180 14:25:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.180 14:25:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.180 14:25:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.180 14:25:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.180 14:25:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.180 14:25:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:50.180 14:25:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:50.180 14:25:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.180 14:25:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.180 14:25:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:50.180 14:25:57 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.180 14:25:57 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:50.180 14:25:57 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:50.180 14:25:57 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.180 14:25:57 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.180 14:25:57 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.180 14:25:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.180 14:25:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.180 14:25:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.180 14:25:57 -- paths/export.sh@5 -- # export PATH 00:05:50.180 14:25:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.180 14:25:57 -- nvmf/common.sh@51 -- # : 0 00:05:50.180 14:25:57 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:50.180 14:25:57 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:05:50.180 14:25:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.180 14:25:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.180 14:25:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.180 14:25:57 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:50.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:50.180 14:25:57 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:50.180 14:25:57 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:50.180 14:25:57 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:50.180 14:25:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:50.180 14:25:57 -- spdk/autotest.sh@32 -- # uname -s 00:05:50.180 14:25:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:50.180 14:25:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:50.180 14:25:57 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:50.180 14:25:57 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:50.180 14:25:57 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:50.180 14:25:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:50.180 14:25:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:50.180 14:25:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:50.180 14:25:57 -- spdk/autotest.sh@48 -- # udevadm_pid=3638625 00:05:50.180 14:25:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:50.180 14:25:57 -- pm/common@17 -- # local monitor 00:05:50.180 14:25:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:50.180 14:25:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:50.180 14:25:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm 
monitor --property 00:05:50.180 14:25:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:50.180 14:25:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:50.180 14:25:57 -- pm/common@25 -- # sleep 1 00:05:50.180 14:25:57 -- pm/common@21 -- # date +%s 00:05:50.180 14:25:57 -- pm/common@21 -- # date +%s 00:05:50.180 14:25:57 -- pm/common@21 -- # date +%s 00:05:50.180 14:25:57 -- pm/common@21 -- # date +%s 00:05:50.181 14:25:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732109157 00:05:50.181 14:25:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732109157 00:05:50.181 14:25:57 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732109157 00:05:50.181 14:25:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732109157 00:05:50.439 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732109157_collect-vmstat.pm.log 00:05:50.439 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732109157_collect-cpu-temp.pm.log 00:05:50.439 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732109157_collect-cpu-load.pm.log 00:05:50.439 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732109157_collect-bmc-pm.bmc.pm.log 00:05:51.375 14:25:58 -- 
spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:51.375 14:25:58 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:51.375 14:25:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.375 14:25:58 -- common/autotest_common.sh@10 -- # set +x 00:05:51.375 14:25:58 -- spdk/autotest.sh@59 -- # create_test_list 00:05:51.375 14:25:58 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:51.375 14:25:58 -- common/autotest_common.sh@10 -- # set +x 00:05:51.375 14:25:58 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:51.375 14:25:58 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:51.375 14:25:58 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:51.375 14:25:58 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:51.375 14:25:58 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:51.375 14:25:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:51.375 14:25:58 -- common/autotest_common.sh@1457 -- # uname 00:05:51.375 14:25:58 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:51.375 14:25:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:51.375 14:25:58 -- common/autotest_common.sh@1477 -- # uname 00:05:51.375 14:25:58 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:51.375 14:25:58 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:51.376 14:25:58 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:51.376 lcov: LCOV version 1.15 00:05:51.376 14:25:58 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:06:01.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:01.358 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:06:11.351 14:26:17 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:11.351 14:26:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:11.351 14:26:17 -- common/autotest_common.sh@10 -- # set +x 00:06:11.351 14:26:17 -- spdk/autotest.sh@78 -- # rm -f 00:06:11.351 14:26:17 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:13.262 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:06:13.262 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:06:13.262 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:06:13.262 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:06:13.262 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:06:13.262 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:06:13.262 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:06:13.262 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:06:13.262 0000:65:00.0 (144d a80a): Already using the nvme driver 00:06:13.262 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:06:13.262 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:06:13.262 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:06:13.262 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:06:13.262 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:06:13.262 0000:00:01.3 (8086 0b00): Already 
using the ioatdma driver 00:06:13.262 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:06:13.262 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:06:13.262 14:26:20 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:13.262 14:26:20 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:13.262 14:26:20 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:13.262 14:26:20 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:13.262 14:26:20 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:13.262 14:26:20 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:13.262 14:26:20 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:13.262 14:26:20 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:13.262 14:26:20 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:13.262 14:26:20 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:13.262 14:26:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:13.262 14:26:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:13.262 14:26:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:13.262 14:26:20 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:13.262 14:26:20 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:13.262 No valid GPT data, bailing 00:06:13.262 14:26:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:13.262 14:26:20 -- scripts/common.sh@394 -- # pt= 00:06:13.262 14:26:20 -- scripts/common.sh@395 -- # return 1 00:06:13.262 14:26:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:13.262 1+0 records in 00:06:13.262 1+0 records out 00:06:13.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00174972 s, 599 MB/s 00:06:13.262 14:26:20 -- spdk/autotest.sh@105 -- # sync 00:06:13.262 14:26:20 -- spdk/autotest.sh@107 -- # 
xtrace_disable_per_cmd reap_spdk_processes 00:06:13.262 14:26:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:13.262 14:26:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:18.547 14:26:25 -- spdk/autotest.sh@111 -- # uname -s 00:06:18.547 14:26:25 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:18.547 14:26:25 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:18.547 14:26:25 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:21.087 Hugepages 00:06:21.087 node hugesize free / total 00:06:21.087 node0 1048576kB 0 / 0 00:06:21.087 node0 2048kB 0 / 0 00:06:21.087 node1 1048576kB 0 / 0 00:06:21.087 node1 2048kB 0 / 0 00:06:21.087 00:06:21.087 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:21.087 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:06:21.087 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:06:21.087 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:06:21.087 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:06:21.087 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:06:21.087 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:06:21.087 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:06:21.087 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:06:21.087 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:06:21.087 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:06:21.087 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:06:21.087 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:06:21.087 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:06:21.087 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:06:21.087 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:06:21.088 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:06:21.088 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:06:21.088 14:26:27 -- spdk/autotest.sh@117 -- # uname -s 00:06:21.088 14:26:27 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:21.088 14:26:27 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:21.088 
14:26:27 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:23.628 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:23.628 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:23.628 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:23.628 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:23.628 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:23.628 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:23.628 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:23.628 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:23.628 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:23.628 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:23.628 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:23.628 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:23.628 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:23.628 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:23.628 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:23.628 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:25.537 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:25.537 14:26:32 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:26.106 14:26:33 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:26.106 14:26:33 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:26.106 14:26:33 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:26.106 14:26:33 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:26.106 14:26:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:26.106 14:26:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:26.106 14:26:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:26.106 14:26:33 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:26.106 14:26:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:26.367 
14:26:33 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:26.367 14:26:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:06:26.367 14:26:33 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:28.908 Waiting for block devices as requested 00:06:28.908 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:28.908 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:28.908 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:28.908 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:28.908 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:29.168 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:29.168 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:29.168 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:29.168 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:06:29.429 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:29.429 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:29.429 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:29.690 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:29.690 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:29.690 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:29.690 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:29.950 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:29.950 14:26:36 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:29.951 14:26:36 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:06:29.951 14:26:36 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:29.951 14:26:36 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:06:29.951 14:26:36 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:29.951 14:26:36 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:06:29.951 14:26:36 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:29.951 14:26:36 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:29.951 14:26:36 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:29.951 14:26:36 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:29.951 14:26:36 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:29.951 14:26:36 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:29.951 14:26:36 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:29.951 14:26:36 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:06:29.951 14:26:36 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:29.951 14:26:36 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:29.951 14:26:36 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:29.951 14:26:36 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:29.951 14:26:36 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:29.951 14:26:36 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:29.951 14:26:36 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:29.951 14:26:36 -- common/autotest_common.sh@1543 -- # continue 00:06:29.951 14:26:36 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:29.951 14:26:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:29.951 14:26:36 -- common/autotest_common.sh@10 -- # set +x 00:06:29.951 14:26:36 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:29.951 14:26:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:29.951 14:26:36 -- common/autotest_common.sh@10 -- # set +x 00:06:29.951 14:26:36 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:32.545 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:32.545 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:32.545 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:32.545 
0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:32.546 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:32.546 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:32.546 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:32.546 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:32.546 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:32.546 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:32.546 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:32.546 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:32.546 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:32.546 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:32.546 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:32.546 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:32.546 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:32.546 14:26:39 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:32.546 14:26:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:32.546 14:26:39 -- common/autotest_common.sh@10 -- # set +x 00:06:32.546 14:26:39 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:32.546 14:26:39 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:32.546 14:26:39 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:32.546 14:26:39 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:32.546 14:26:39 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:32.546 14:26:39 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:32.546 14:26:39 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:32.546 14:26:39 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:32.546 14:26:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:32.546 14:26:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:32.546 14:26:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:32.546 14:26:39 -- common/autotest_common.sh@1499 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:32.546 14:26:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:32.878 14:26:39 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:32.878 14:26:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:06:32.878 14:26:39 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:32.878 14:26:39 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:06:32.878 14:26:39 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:06:32.878 14:26:39 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:06:32.878 14:26:39 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:32.878 14:26:39 -- common/autotest_common.sh@1572 -- # return 0 00:06:32.878 14:26:39 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:32.878 14:26:39 -- common/autotest_common.sh@1580 -- # return 0 00:06:32.878 14:26:39 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:32.878 14:26:39 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:32.878 14:26:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:32.878 14:26:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:32.878 14:26:39 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:32.878 14:26:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.878 14:26:39 -- common/autotest_common.sh@10 -- # set +x 00:06:32.878 14:26:39 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:32.878 14:26:39 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:32.878 14:26:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.878 14:26:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.878 14:26:39 -- common/autotest_common.sh@10 -- # set +x 00:06:32.878 ************************************ 00:06:32.878 START TEST env 00:06:32.878 ************************************ 00:06:32.878 
14:26:39 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:32.878 * Looking for test storage... 00:06:32.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:32.878 14:26:39 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:32.878 14:26:39 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:32.878 14:26:39 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:32.878 14:26:39 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:32.878 14:26:39 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.878 14:26:39 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.878 14:26:39 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.878 14:26:39 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.878 14:26:39 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.878 14:26:39 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.878 14:26:39 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.878 14:26:39 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.878 14:26:39 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.878 14:26:39 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.878 14:26:39 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.878 14:26:39 env -- scripts/common.sh@344 -- # case "$op" in 00:06:32.878 14:26:39 env -- scripts/common.sh@345 -- # : 1 00:06:32.878 14:26:39 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.878 14:26:39 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.878 14:26:39 env -- scripts/common.sh@365 -- # decimal 1 00:06:32.878 14:26:39 env -- scripts/common.sh@353 -- # local d=1 00:06:32.878 14:26:39 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.878 14:26:39 env -- scripts/common.sh@355 -- # echo 1 00:06:32.878 14:26:39 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.878 14:26:39 env -- scripts/common.sh@366 -- # decimal 2 00:06:32.878 14:26:39 env -- scripts/common.sh@353 -- # local d=2 00:06:32.878 14:26:39 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.878 14:26:39 env -- scripts/common.sh@355 -- # echo 2 00:06:32.878 14:26:39 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.878 14:26:39 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.878 14:26:39 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.879 14:26:39 env -- scripts/common.sh@368 -- # return 0 00:06:32.879 14:26:39 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.879 14:26:39 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:32.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.879 --rc genhtml_branch_coverage=1 00:06:32.879 --rc genhtml_function_coverage=1 00:06:32.879 --rc genhtml_legend=1 00:06:32.879 --rc geninfo_all_blocks=1 00:06:32.879 --rc geninfo_unexecuted_blocks=1 00:06:32.879 00:06:32.879 ' 00:06:32.879 14:26:39 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:32.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.879 --rc genhtml_branch_coverage=1 00:06:32.879 --rc genhtml_function_coverage=1 00:06:32.879 --rc genhtml_legend=1 00:06:32.879 --rc geninfo_all_blocks=1 00:06:32.879 --rc geninfo_unexecuted_blocks=1 00:06:32.879 00:06:32.879 ' 00:06:32.879 14:26:39 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:32.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:32.879 --rc genhtml_branch_coverage=1 00:06:32.879 --rc genhtml_function_coverage=1 00:06:32.879 --rc genhtml_legend=1 00:06:32.879 --rc geninfo_all_blocks=1 00:06:32.879 --rc geninfo_unexecuted_blocks=1 00:06:32.879 00:06:32.879 ' 00:06:32.879 14:26:39 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:32.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.879 --rc genhtml_branch_coverage=1 00:06:32.879 --rc genhtml_function_coverage=1 00:06:32.879 --rc genhtml_legend=1 00:06:32.879 --rc geninfo_all_blocks=1 00:06:32.879 --rc geninfo_unexecuted_blocks=1 00:06:32.879 00:06:32.879 ' 00:06:32.879 14:26:39 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:32.879 14:26:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.879 14:26:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.879 14:26:39 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.879 ************************************ 00:06:32.879 START TEST env_memory 00:06:32.879 ************************************ 00:06:32.879 14:26:39 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:32.879 00:06:32.879 00:06:32.879 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.879 http://cunit.sourceforge.net/ 00:06:32.879 00:06:32.879 00:06:32.879 Suite: memory 00:06:32.879 Test: alloc and free memory map ...[2024-11-20 14:26:39.811123] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:32.879 passed 00:06:32.879 Test: mem map translation ...[2024-11-20 14:26:39.836863] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:32.879 [2024-11-20 
14:26:39.836896] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:32.879 [2024-11-20 14:26:39.836943] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:32.879 [2024-11-20 14:26:39.836950] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:32.879 passed 00:06:32.879 Test: mem map registration ...[2024-11-20 14:26:39.892384] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:32.879 [2024-11-20 14:26:39.892412] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:32.879 passed 00:06:33.174 Test: mem map adjacent registrations ...passed 00:06:33.174 00:06:33.174 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.174 suites 1 1 n/a 0 0 00:06:33.174 tests 4 4 4 0 0 00:06:33.174 asserts 152 152 152 0 n/a 00:06:33.174 00:06:33.174 Elapsed time = 0.187 seconds 00:06:33.174 00:06:33.174 real 0m0.195s 00:06:33.174 user 0m0.182s 00:06:33.174 sys 0m0.012s 00:06:33.174 14:26:39 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.174 14:26:39 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:33.174 ************************************ 00:06:33.174 END TEST env_memory 00:06:33.174 ************************************ 00:06:33.174 14:26:40 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:33.174 14:26:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:06:33.174 14:26:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.174 14:26:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:33.174 ************************************ 00:06:33.174 START TEST env_vtophys 00:06:33.174 ************************************ 00:06:33.174 14:26:40 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:33.174 EAL: lib.eal log level changed from notice to debug 00:06:33.174 EAL: Detected lcore 0 as core 0 on socket 0 00:06:33.174 EAL: Detected lcore 1 as core 1 on socket 0 00:06:33.174 EAL: Detected lcore 2 as core 2 on socket 0 00:06:33.174 EAL: Detected lcore 3 as core 3 on socket 0 00:06:33.174 EAL: Detected lcore 4 as core 4 on socket 0 00:06:33.174 EAL: Detected lcore 5 as core 5 on socket 0 00:06:33.174 EAL: Detected lcore 6 as core 6 on socket 0 00:06:33.174 EAL: Detected lcore 7 as core 7 on socket 0 00:06:33.174 EAL: Detected lcore 8 as core 8 on socket 0 00:06:33.174 EAL: Detected lcore 9 as core 9 on socket 0 00:06:33.174 EAL: Detected lcore 10 as core 10 on socket 0 00:06:33.174 EAL: Detected lcore 11 as core 11 on socket 0 00:06:33.174 EAL: Detected lcore 12 as core 12 on socket 0 00:06:33.174 EAL: Detected lcore 13 as core 13 on socket 0 00:06:33.174 EAL: Detected lcore 14 as core 14 on socket 0 00:06:33.174 EAL: Detected lcore 15 as core 15 on socket 0 00:06:33.174 EAL: Detected lcore 16 as core 16 on socket 0 00:06:33.174 EAL: Detected lcore 17 as core 17 on socket 0 00:06:33.174 EAL: Detected lcore 18 as core 18 on socket 0 00:06:33.174 EAL: Detected lcore 19 as core 19 on socket 0 00:06:33.174 EAL: Detected lcore 20 as core 20 on socket 0 00:06:33.174 EAL: Detected lcore 21 as core 21 on socket 0 00:06:33.174 EAL: Detected lcore 22 as core 22 on socket 0 00:06:33.174 EAL: Detected lcore 23 as core 23 on socket 0 00:06:33.174 EAL: Detected lcore 24 as core 24 on socket 0 00:06:33.174 EAL: Detected lcore 25 
as core 25 on socket 0 00:06:33.174 EAL: Detected lcore 26 as core 26 on socket 0 00:06:33.174 EAL: Detected lcore 27 as core 27 on socket 0 00:06:33.174 EAL: Detected lcore 28 as core 28 on socket 0 00:06:33.174 EAL: Detected lcore 29 as core 29 on socket 0 00:06:33.174 EAL: Detected lcore 30 as core 30 on socket 0 00:06:33.174 EAL: Detected lcore 31 as core 31 on socket 0 00:06:33.174 EAL: Detected lcore 32 as core 32 on socket 0 00:06:33.175 EAL: Detected lcore 33 as core 33 on socket 0 00:06:33.175 EAL: Detected lcore 34 as core 34 on socket 0 00:06:33.175 EAL: Detected lcore 35 as core 35 on socket 0 00:06:33.175 EAL: Detected lcore 36 as core 0 on socket 1 00:06:33.175 EAL: Detected lcore 37 as core 1 on socket 1 00:06:33.175 EAL: Detected lcore 38 as core 2 on socket 1 00:06:33.175 EAL: Detected lcore 39 as core 3 on socket 1 00:06:33.175 EAL: Detected lcore 40 as core 4 on socket 1 00:06:33.175 EAL: Detected lcore 41 as core 5 on socket 1 00:06:33.175 EAL: Detected lcore 42 as core 6 on socket 1 00:06:33.175 EAL: Detected lcore 43 as core 7 on socket 1 00:06:33.175 EAL: Detected lcore 44 as core 8 on socket 1 00:06:33.175 EAL: Detected lcore 45 as core 9 on socket 1 00:06:33.175 EAL: Detected lcore 46 as core 10 on socket 1 00:06:33.175 EAL: Detected lcore 47 as core 11 on socket 1 00:06:33.175 EAL: Detected lcore 48 as core 12 on socket 1 00:06:33.175 EAL: Detected lcore 49 as core 13 on socket 1 00:06:33.175 EAL: Detected lcore 50 as core 14 on socket 1 00:06:33.175 EAL: Detected lcore 51 as core 15 on socket 1 00:06:33.175 EAL: Detected lcore 52 as core 16 on socket 1 00:06:33.175 EAL: Detected lcore 53 as core 17 on socket 1 00:06:33.175 EAL: Detected lcore 54 as core 18 on socket 1 00:06:33.175 EAL: Detected lcore 55 as core 19 on socket 1 00:06:33.175 EAL: Detected lcore 56 as core 20 on socket 1 00:06:33.175 EAL: Detected lcore 57 as core 21 on socket 1 00:06:33.175 EAL: Detected lcore 58 as core 22 on socket 1 00:06:33.175 EAL: Detected lcore 59 as 
core 23 on socket 1 00:06:33.175 EAL: Detected lcore 60 as core 24 on socket 1 00:06:33.175 EAL: Detected lcore 61 as core 25 on socket 1 00:06:33.175 EAL: Detected lcore 62 as core 26 on socket 1 00:06:33.175 EAL: Detected lcore 63 as core 27 on socket 1 00:06:33.175 EAL: Detected lcore 64 as core 28 on socket 1 00:06:33.175 EAL: Detected lcore 65 as core 29 on socket 1 00:06:33.175 EAL: Detected lcore 66 as core 30 on socket 1 00:06:33.175 EAL: Detected lcore 67 as core 31 on socket 1 00:06:33.175 EAL: Detected lcore 68 as core 32 on socket 1 00:06:33.175 EAL: Detected lcore 69 as core 33 on socket 1 00:06:33.175 EAL: Detected lcore 70 as core 34 on socket 1 00:06:33.175 EAL: Detected lcore 71 as core 35 on socket 1 00:06:33.175 EAL: Detected lcore 72 as core 0 on socket 0 00:06:33.175 EAL: Detected lcore 73 as core 1 on socket 0 00:06:33.175 EAL: Detected lcore 74 as core 2 on socket 0 00:06:33.175 EAL: Detected lcore 75 as core 3 on socket 0 00:06:33.175 EAL: Detected lcore 76 as core 4 on socket 0 00:06:33.175 EAL: Detected lcore 77 as core 5 on socket 0 00:06:33.175 EAL: Detected lcore 78 as core 6 on socket 0 00:06:33.175 EAL: Detected lcore 79 as core 7 on socket 0 00:06:33.175 EAL: Detected lcore 80 as core 8 on socket 0 00:06:33.175 EAL: Detected lcore 81 as core 9 on socket 0 00:06:33.175 EAL: Detected lcore 82 as core 10 on socket 0 00:06:33.175 EAL: Detected lcore 83 as core 11 on socket 0 00:06:33.175 EAL: Detected lcore 84 as core 12 on socket 0 00:06:33.175 EAL: Detected lcore 85 as core 13 on socket 0 00:06:33.175 EAL: Detected lcore 86 as core 14 on socket 0 00:06:33.175 EAL: Detected lcore 87 as core 15 on socket 0 00:06:33.175 EAL: Detected lcore 88 as core 16 on socket 0 00:06:33.175 EAL: Detected lcore 89 as core 17 on socket 0 00:06:33.175 EAL: Detected lcore 90 as core 18 on socket 0 00:06:33.175 EAL: Detected lcore 91 as core 19 on socket 0 00:06:33.175 EAL: Detected lcore 92 as core 20 on socket 0 00:06:33.175 EAL: Detected lcore 93 as 
core 21 on socket 0 00:06:33.175 EAL: Detected lcore 94 as core 22 on socket 0 00:06:33.175 EAL: Detected lcore 95 as core 23 on socket 0 00:06:33.175 EAL: Detected lcore 96 as core 24 on socket 0 00:06:33.175 EAL: Detected lcore 97 as core 25 on socket 0 00:06:33.175 EAL: Detected lcore 98 as core 26 on socket 0 00:06:33.175 EAL: Detected lcore 99 as core 27 on socket 0 00:06:33.175 EAL: Detected lcore 100 as core 28 on socket 0 00:06:33.175 EAL: Detected lcore 101 as core 29 on socket 0 00:06:33.175 EAL: Detected lcore 102 as core 30 on socket 0 00:06:33.175 EAL: Detected lcore 103 as core 31 on socket 0 00:06:33.175 EAL: Detected lcore 104 as core 32 on socket 0 00:06:33.175 EAL: Detected lcore 105 as core 33 on socket 0 00:06:33.175 EAL: Detected lcore 106 as core 34 on socket 0 00:06:33.175 EAL: Detected lcore 107 as core 35 on socket 0 00:06:33.175 EAL: Detected lcore 108 as core 0 on socket 1 00:06:33.175 EAL: Detected lcore 109 as core 1 on socket 1 00:06:33.175 EAL: Detected lcore 110 as core 2 on socket 1 00:06:33.175 EAL: Detected lcore 111 as core 3 on socket 1 00:06:33.175 EAL: Detected lcore 112 as core 4 on socket 1 00:06:33.175 EAL: Detected lcore 113 as core 5 on socket 1 00:06:33.175 EAL: Detected lcore 114 as core 6 on socket 1 00:06:33.175 EAL: Detected lcore 115 as core 7 on socket 1 00:06:33.175 EAL: Detected lcore 116 as core 8 on socket 1 00:06:33.175 EAL: Detected lcore 117 as core 9 on socket 1 00:06:33.175 EAL: Detected lcore 118 as core 10 on socket 1 00:06:33.175 EAL: Detected lcore 119 as core 11 on socket 1 00:06:33.175 EAL: Detected lcore 120 as core 12 on socket 1 00:06:33.175 EAL: Detected lcore 121 as core 13 on socket 1 00:06:33.175 EAL: Detected lcore 122 as core 14 on socket 1 00:06:33.175 EAL: Detected lcore 123 as core 15 on socket 1 00:06:33.175 EAL: Detected lcore 124 as core 16 on socket 1 00:06:33.175 EAL: Detected lcore 125 as core 17 on socket 1 00:06:33.175 EAL: Detected lcore 126 as core 18 on socket 1 00:06:33.175 
EAL: Detected lcore 127 as core 19 on socket 1 00:06:33.175 EAL: Skipped lcore 128 as core 20 on socket 1 00:06:33.175 EAL: Skipped lcore 129 as core 21 on socket 1 00:06:33.175 EAL: Skipped lcore 130 as core 22 on socket 1 00:06:33.175 EAL: Skipped lcore 131 as core 23 on socket 1 00:06:33.175 EAL: Skipped lcore 132 as core 24 on socket 1 00:06:33.175 EAL: Skipped lcore 133 as core 25 on socket 1 00:06:33.175 EAL: Skipped lcore 134 as core 26 on socket 1 00:06:33.175 EAL: Skipped lcore 135 as core 27 on socket 1 00:06:33.175 EAL: Skipped lcore 136 as core 28 on socket 1 00:06:33.175 EAL: Skipped lcore 137 as core 29 on socket 1 00:06:33.175 EAL: Skipped lcore 138 as core 30 on socket 1 00:06:33.175 EAL: Skipped lcore 139 as core 31 on socket 1 00:06:33.175 EAL: Skipped lcore 140 as core 32 on socket 1 00:06:33.175 EAL: Skipped lcore 141 as core 33 on socket 1 00:06:33.175 EAL: Skipped lcore 142 as core 34 on socket 1 00:06:33.175 EAL: Skipped lcore 143 as core 35 on socket 1 00:06:33.175 EAL: Maximum logical cores by configuration: 128 00:06:33.175 EAL: Detected CPU lcores: 128 00:06:33.175 EAL: Detected NUMA nodes: 2 00:06:33.175 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:33.175 EAL: Detected shared linkage of DPDK 00:06:33.175 EAL: No shared files mode enabled, IPC will be disabled 00:06:33.175 EAL: Bus pci wants IOVA as 'DC' 00:06:33.175 EAL: Buses did not request a specific IOVA mode. 00:06:33.175 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:33.175 EAL: Selected IOVA mode 'VA' 00:06:33.175 EAL: Probing VFIO support... 00:06:33.175 EAL: IOMMU type 1 (Type 1) is supported 00:06:33.175 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:33.175 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:33.175 EAL: VFIO support initialized 00:06:33.175 EAL: Ask a virtual area of 0x2e000 bytes 00:06:33.175 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:33.175 EAL: Setting up physically contiguous memory... 
00:06:33.175 EAL: Setting maximum number of open files to 524288 00:06:33.175 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:33.175 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:33.175 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:33.175 EAL: Ask a virtual area of 0x61000 bytes 00:06:33.175 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:33.175 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:33.175 EAL: Ask a virtual area of 0x400000000 bytes 00:06:33.175 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:33.175 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:33.175 EAL: Ask a virtual area of 0x61000 bytes 00:06:33.175 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:33.175 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:33.175 EAL: Ask a virtual area of 0x400000000 bytes 00:06:33.175 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:33.175 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:33.175 EAL: Ask a virtual area of 0x61000 bytes 00:06:33.175 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:33.175 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:33.175 EAL: Ask a virtual area of 0x400000000 bytes 00:06:33.175 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:33.175 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:33.175 EAL: Ask a virtual area of 0x61000 bytes 00:06:33.175 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:33.175 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:33.175 EAL: Ask a virtual area of 0x400000000 bytes 00:06:33.175 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:33.175 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:33.175 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:06:33.175 EAL: Ask a virtual area of 0x61000 bytes 00:06:33.175 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:33.175 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:33.175 EAL: Ask a virtual area of 0x400000000 bytes 00:06:33.175 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:33.175 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:33.175 EAL: Ask a virtual area of 0x61000 bytes 00:06:33.175 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:33.175 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:33.175 EAL: Ask a virtual area of 0x400000000 bytes 00:06:33.175 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:33.175 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:33.175 EAL: Ask a virtual area of 0x61000 bytes 00:06:33.175 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:33.175 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:33.175 EAL: Ask a virtual area of 0x400000000 bytes 00:06:33.175 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:33.175 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:33.175 EAL: Ask a virtual area of 0x61000 bytes 00:06:33.175 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:33.175 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:33.175 EAL: Ask a virtual area of 0x400000000 bytes 00:06:33.175 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:33.176 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:33.176 EAL: Hugepages will be freed exactly as allocated. 
00:06:33.176 EAL: No shared files mode enabled, IPC is disabled 00:06:33.176 EAL: No shared files mode enabled, IPC is disabled 00:06:33.176 EAL: TSC frequency is ~2400000 KHz 00:06:33.176 EAL: Main lcore 0 is ready (tid=7f1664de3a00;cpuset=[0]) 00:06:33.176 EAL: Trying to obtain current memory policy. 00:06:33.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.176 EAL: Restoring previous memory policy: 0 00:06:33.176 EAL: request: mp_malloc_sync 00:06:33.176 EAL: No shared files mode enabled, IPC is disabled 00:06:33.176 EAL: Heap on socket 0 was expanded by 2MB 00:06:33.176 EAL: No shared files mode enabled, IPC is disabled 00:06:33.176 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:33.176 EAL: Mem event callback 'spdk:(nil)' registered 00:06:33.176 00:06:33.176 00:06:33.176 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.176 http://cunit.sourceforge.net/ 00:06:33.176 00:06:33.176 00:06:33.176 Suite: components_suite 00:06:33.176 Test: vtophys_malloc_test ...passed 00:06:33.176 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:33.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.176 EAL: Restoring previous memory policy: 4 00:06:33.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.176 EAL: request: mp_malloc_sync 00:06:33.176 EAL: No shared files mode enabled, IPC is disabled 00:06:33.176 EAL: Heap on socket 0 was expanded by 4MB 00:06:33.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.176 EAL: request: mp_malloc_sync 00:06:33.176 EAL: No shared files mode enabled, IPC is disabled 00:06:33.176 EAL: Heap on socket 0 was shrunk by 4MB 00:06:33.176 EAL: Trying to obtain current memory policy. 
00:06:33.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.176 EAL: Restoring previous memory policy: 4 00:06:33.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.176 EAL: request: mp_malloc_sync 00:06:33.176 EAL: No shared files mode enabled, IPC is disabled 00:06:33.176 EAL: Heap on socket 0 was expanded by 6MB 00:06:33.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.176 EAL: request: mp_malloc_sync 00:06:33.176 EAL: No shared files mode enabled, IPC is disabled 00:06:33.176 EAL: Heap on socket 0 was shrunk by 6MB 00:06:33.176 EAL: Trying to obtain current memory policy. 00:06:33.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.176 EAL: Restoring previous memory policy: 4 00:06:33.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.176 EAL: request: mp_malloc_sync 00:06:33.176 EAL: No shared files mode enabled, IPC is disabled 00:06:33.176 EAL: Heap on socket 0 was expanded by 10MB 00:06:33.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.176 EAL: request: mp_malloc_sync 00:06:33.176 EAL: No shared files mode enabled, IPC is disabled 00:06:33.176 EAL: Heap on socket 0 was shrunk by 10MB 00:06:33.176 EAL: Trying to obtain current memory policy. 00:06:33.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.176 EAL: Restoring previous memory policy: 4 00:06:33.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.176 EAL: request: mp_malloc_sync 00:06:33.176 EAL: No shared files mode enabled, IPC is disabled 00:06:33.176 EAL: Heap on socket 0 was expanded by 18MB 00:06:33.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.176 EAL: request: mp_malloc_sync 00:06:33.176 EAL: No shared files mode enabled, IPC is disabled 00:06:33.176 EAL: Heap on socket 0 was shrunk by 18MB 00:06:33.176 EAL: Trying to obtain current memory policy. 
00:06:33.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.176 EAL: Restoring previous memory policy: 4 00:06:33.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.176 EAL: request: mp_malloc_sync 00:06:33.176 EAL: No shared files mode enabled, IPC is disabled 00:06:33.176 EAL: Heap on socket 0 was expanded by 34MB 00:06:33.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.176 EAL: request: mp_malloc_sync 00:06:33.176 EAL: No shared files mode enabled, IPC is disabled 00:06:33.176 EAL: Heap on socket 0 was shrunk by 34MB 00:06:33.176 EAL: Trying to obtain current memory policy. 00:06:33.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.176 EAL: Restoring previous memory policy: 4 00:06:33.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.176 EAL: request: mp_malloc_sync 00:06:33.176 EAL: No shared files mode enabled, IPC is disabled 00:06:33.176 EAL: Heap on socket 0 was expanded by 66MB 00:06:33.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.176 EAL: request: mp_malloc_sync 00:06:33.176 EAL: No shared files mode enabled, IPC is disabled 00:06:33.176 EAL: Heap on socket 0 was shrunk by 66MB 00:06:33.176 EAL: Trying to obtain current memory policy. 00:06:33.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.176 EAL: Restoring previous memory policy: 4 00:06:33.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.176 EAL: request: mp_malloc_sync 00:06:33.176 EAL: No shared files mode enabled, IPC is disabled 00:06:33.176 EAL: Heap on socket 0 was expanded by 130MB 00:06:33.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.478 EAL: request: mp_malloc_sync 00:06:33.478 EAL: No shared files mode enabled, IPC is disabled 00:06:33.478 EAL: Heap on socket 0 was shrunk by 130MB 00:06:33.478 EAL: Trying to obtain current memory policy. 
00:06:33.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.478 EAL: Restoring previous memory policy: 4 00:06:33.478 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.478 EAL: request: mp_malloc_sync 00:06:33.478 EAL: No shared files mode enabled, IPC is disabled 00:06:33.478 EAL: Heap on socket 0 was expanded by 258MB 00:06:33.478 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.478 EAL: request: mp_malloc_sync 00:06:33.478 EAL: No shared files mode enabled, IPC is disabled 00:06:33.478 EAL: Heap on socket 0 was shrunk by 258MB 00:06:33.478 EAL: Trying to obtain current memory policy. 00:06:33.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.478 EAL: Restoring previous memory policy: 4 00:06:33.478 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.478 EAL: request: mp_malloc_sync 00:06:33.478 EAL: No shared files mode enabled, IPC is disabled 00:06:33.478 EAL: Heap on socket 0 was expanded by 514MB 00:06:33.478 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.478 EAL: request: mp_malloc_sync 00:06:33.478 EAL: No shared files mode enabled, IPC is disabled 00:06:33.478 EAL: Heap on socket 0 was shrunk by 514MB 00:06:33.478 EAL: Trying to obtain current memory policy. 
00:06:33.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.739 EAL: Restoring previous memory policy: 4 00:06:33.739 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.739 EAL: request: mp_malloc_sync 00:06:33.739 EAL: No shared files mode enabled, IPC is disabled 00:06:33.739 EAL: Heap on socket 0 was expanded by 1026MB 00:06:33.739 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.999 EAL: request: mp_malloc_sync 00:06:33.999 EAL: No shared files mode enabled, IPC is disabled 00:06:33.999 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:33.999 passed 00:06:33.999 00:06:33.999 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.000 suites 1 1 n/a 0 0 00:06:34.000 tests 2 2 2 0 0 00:06:34.000 asserts 497 497 497 0 n/a 00:06:34.000 00:06:34.000 Elapsed time = 0.689 seconds 00:06:34.000 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.000 EAL: request: mp_malloc_sync 00:06:34.000 EAL: No shared files mode enabled, IPC is disabled 00:06:34.000 EAL: Heap on socket 0 was shrunk by 2MB 00:06:34.000 EAL: No shared files mode enabled, IPC is disabled 00:06:34.000 EAL: No shared files mode enabled, IPC is disabled 00:06:34.000 EAL: No shared files mode enabled, IPC is disabled 00:06:34.000 00:06:34.000 real 0m0.822s 00:06:34.000 user 0m0.431s 00:06:34.000 sys 0m0.362s 00:06:34.000 14:26:40 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.000 14:26:40 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:34.000 ************************************ 00:06:34.000 END TEST env_vtophys 00:06:34.000 ************************************ 00:06:34.000 14:26:40 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:34.000 14:26:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.000 14:26:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.000 14:26:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:34.000 
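An aside on the vtophys output above (not part of the log): the "expanded by N MB" sizes form the sequence 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026. This is consistent with each test allocation being a power-of-two number of MiB that the EAL satisfies with one extra 2 MiB hugepage of overhead, i.e. an expansion of 2^n + 2 MiB — a hedged reading of the log, not a statement about DPDK internals. A small sketch reproduces the sequence:

```python
# Reproduce the heap-expansion sizes reported in the EAL log above,
# under the (hedged) assumption that each allocation is 2**n MiB and
# costs one extra 2 MiB hugepage of overhead.
HUGEPAGE_MIB = 2

def expected_expansions(max_alloc_mib=1024):
    sizes = []
    alloc = 2  # first doubling step after the initial 2 MiB heap
    while alloc <= max_alloc_mib:
        sizes.append(alloc + HUGEPAGE_MIB)
        alloc *= 2
    return sizes

print(expected_expansions())
# [4, 6, 10, 18, 34, 66, 130, 258, 514, 1026]
```

The matching "shrunk by N MB" lines confirm the symmetric free path ("Hugepages will be freed exactly as allocated").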
************************************ 00:06:34.000 START TEST env_pci 00:06:34.000 ************************************ 00:06:34.000 14:26:40 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:34.000 00:06:34.000 00:06:34.000 CUnit - A unit testing framework for C - Version 2.1-3 00:06:34.000 http://cunit.sourceforge.net/ 00:06:34.000 00:06:34.000 00:06:34.000 Suite: pci 00:06:34.000 Test: pci_hook ...[2024-11-20 14:26:40.902487] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3655348 has claimed it 00:06:34.000 EAL: Cannot find device (10000:00:01.0) 00:06:34.000 EAL: Failed to attach device on primary process 00:06:34.000 passed 00:06:34.000 00:06:34.000 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.000 suites 1 1 n/a 0 0 00:06:34.000 tests 1 1 1 0 0 00:06:34.000 asserts 25 25 25 0 n/a 00:06:34.000 00:06:34.000 Elapsed time = 0.024 seconds 00:06:34.000 00:06:34.000 real 0m0.035s 00:06:34.000 user 0m0.011s 00:06:34.000 sys 0m0.023s 00:06:34.000 14:26:40 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.000 14:26:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:34.000 ************************************ 00:06:34.000 END TEST env_pci 00:06:34.000 ************************************ 00:06:34.000 14:26:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:34.000 14:26:40 env -- env/env.sh@15 -- # uname 00:06:34.000 14:26:40 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:34.000 14:26:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:34.000 14:26:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:34.000 14:26:40 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:34.000 14:26:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.000 14:26:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:34.000 ************************************ 00:06:34.000 START TEST env_dpdk_post_init 00:06:34.000 ************************************ 00:06:34.000 14:26:40 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:34.000 EAL: Detected CPU lcores: 128 00:06:34.000 EAL: Detected NUMA nodes: 2 00:06:34.000 EAL: Detected shared linkage of DPDK 00:06:34.000 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:34.000 EAL: Selected IOVA mode 'VA' 00:06:34.000 EAL: VFIO support initialized 00:06:34.000 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:34.259 EAL: Using IOMMU type 1 (Type 1) 00:06:34.259 EAL: Ignore mapping IO port bar(1) 00:06:34.519 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:06:34.519 EAL: Ignore mapping IO port bar(1) 00:06:34.519 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:34.780 EAL: Ignore mapping IO port bar(1) 00:06:34.780 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:35.039 EAL: Ignore mapping IO port bar(1) 00:06:35.039 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:35.299 EAL: Ignore mapping IO port bar(1) 00:06:35.299 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:06:35.299 EAL: Ignore mapping IO port bar(1) 00:06:35.559 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:06:35.559 EAL: Ignore mapping IO port bar(1) 00:06:35.819 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:06:35.819 EAL: Ignore mapping IO port bar(1) 00:06:36.079 EAL: 
Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:06:36.079 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:06:36.339 EAL: Ignore mapping IO port bar(1) 00:06:36.339 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:06:36.599 EAL: Ignore mapping IO port bar(1) 00:06:36.599 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:06:36.858 EAL: Ignore mapping IO port bar(1) 00:06:36.858 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:06:36.858 EAL: Ignore mapping IO port bar(1) 00:06:37.117 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:06:37.117 EAL: Ignore mapping IO port bar(1) 00:06:37.377 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:06:37.377 EAL: Ignore mapping IO port bar(1) 00:06:37.637 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:06:37.637 EAL: Ignore mapping IO port bar(1) 00:06:37.637 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:06:37.897 EAL: Ignore mapping IO port bar(1) 00:06:37.897 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:06:37.897 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:06:37.898 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:06:38.157 Starting DPDK initialization... 00:06:38.157 Starting SPDK post initialization... 00:06:38.157 SPDK NVMe probe 00:06:38.157 Attaching to 0000:65:00.0 00:06:38.157 Attached to 0000:65:00.0 00:06:38.157 Cleaning up... 
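The probe lines above identify each device by its PCI address (domain:bus:device.function) plus NUMA socket. As an illustrative aside (not part of the test run, and not an SPDK API), those addresses can be pulled out of such log lines with a small regex:

```python
import re

# Match the "device: DDDD:BB:DD.F (socket N)" shape seen in the
# EAL probe lines above; purely illustrative log scraping.
PROBE_RE = re.compile(
    r"device: (?P<bdf>[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]) "
    r"\(socket (?P<socket>\d+)\)"
)

def parse_probes(log_text):
    """Return (bdf, socket) pairs for every probed device in log_text."""
    return [(m.group("bdf"), int(m.group("socket")))
            for m in PROBE_RE.finditer(log_text)]

sample = ("EAL: Probe PCI driver: spdk_ioat (8086:0b00) "
          "device: 0000:80:01.7 (socket 1)")
print(parse_probes(sample))  # [('0000:80:01.7', 1)]
```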
00:06:40.065 00:06:40.065 real 0m5.721s 00:06:40.065 user 0m0.104s 00:06:40.065 sys 0m0.175s 00:06:40.065 14:26:46 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.065 14:26:46 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:40.065 ************************************ 00:06:40.065 END TEST env_dpdk_post_init 00:06:40.065 ************************************ 00:06:40.065 14:26:46 env -- env/env.sh@26 -- # uname 00:06:40.065 14:26:46 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:40.065 14:26:46 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:40.065 14:26:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.065 14:26:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.065 14:26:46 env -- common/autotest_common.sh@10 -- # set +x 00:06:40.065 ************************************ 00:06:40.065 START TEST env_mem_callbacks 00:06:40.065 ************************************ 00:06:40.065 14:26:46 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:40.065 EAL: Detected CPU lcores: 128 00:06:40.065 EAL: Detected NUMA nodes: 2 00:06:40.065 EAL: Detected shared linkage of DPDK 00:06:40.065 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:40.065 EAL: Selected IOVA mode 'VA' 00:06:40.065 EAL: VFIO support initialized 00:06:40.065 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:40.065 00:06:40.065 00:06:40.065 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.065 http://cunit.sourceforge.net/ 00:06:40.065 00:06:40.065 00:06:40.065 Suite: memory 00:06:40.065 Test: test ... 
00:06:40.065 register 0x200000200000 2097152 00:06:40.065 malloc 3145728 00:06:40.065 register 0x200000400000 4194304 00:06:40.065 buf 0x200000500000 len 3145728 PASSED 00:06:40.065 malloc 64 00:06:40.065 buf 0x2000004fff40 len 64 PASSED 00:06:40.066 malloc 4194304 00:06:40.066 register 0x200000800000 6291456 00:06:40.066 buf 0x200000a00000 len 4194304 PASSED 00:06:40.066 free 0x200000500000 3145728 00:06:40.066 free 0x2000004fff40 64 00:06:40.066 unregister 0x200000400000 4194304 PASSED 00:06:40.066 free 0x200000a00000 4194304 00:06:40.066 unregister 0x200000800000 6291456 PASSED 00:06:40.066 malloc 8388608 00:06:40.066 register 0x200000400000 10485760 00:06:40.066 buf 0x200000600000 len 8388608 PASSED 00:06:40.066 free 0x200000600000 8388608 00:06:40.066 unregister 0x200000400000 10485760 PASSED 00:06:40.066 passed 00:06:40.066 00:06:40.066 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.066 suites 1 1 n/a 0 0 00:06:40.066 tests 1 1 1 0 0 00:06:40.066 asserts 15 15 15 0 n/a 00:06:40.066 00:06:40.066 Elapsed time = 0.008 seconds 00:06:40.066 00:06:40.066 real 0m0.055s 00:06:40.066 user 0m0.016s 00:06:40.066 sys 0m0.038s 00:06:40.066 14:26:46 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.066 14:26:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:40.066 ************************************ 00:06:40.066 END TEST env_mem_callbacks 00:06:40.066 ************************************ 00:06:40.066 00:06:40.066 real 0m7.219s 00:06:40.066 user 0m0.904s 00:06:40.066 sys 0m0.865s 00:06:40.066 14:26:46 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.066 14:26:46 env -- common/autotest_common.sh@10 -- # set +x 00:06:40.066 ************************************ 00:06:40.066 END TEST env 00:06:40.066 ************************************ 00:06:40.066 14:26:46 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:40.066 14:26:46 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.066 14:26:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.066 14:26:46 -- common/autotest_common.sh@10 -- # set +x 00:06:40.066 ************************************ 00:06:40.066 START TEST rpc 00:06:40.066 ************************************ 00:06:40.066 14:26:46 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:40.066 * Looking for test storage... 00:06:40.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:40.066 14:26:46 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.066 14:26:46 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.066 14:26:46 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.066 14:26:47 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.066 14:26:47 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.066 14:26:47 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.066 14:26:47 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.066 14:26:47 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.066 14:26:47 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.066 14:26:47 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.066 14:26:47 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.066 14:26:47 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.066 14:26:47 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.066 14:26:47 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.066 14:26:47 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.066 14:26:47 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:40.066 14:26:47 rpc -- scripts/common.sh@345 -- # : 1 00:06:40.066 14:26:47 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.066 14:26:47 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.066 14:26:47 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:40.066 14:26:47 rpc -- scripts/common.sh@353 -- # local d=1 00:06:40.066 14:26:47 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.066 14:26:47 rpc -- scripts/common.sh@355 -- # echo 1 00:06:40.066 14:26:47 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.066 14:26:47 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:40.066 14:26:47 rpc -- scripts/common.sh@353 -- # local d=2 00:06:40.066 14:26:47 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.066 14:26:47 rpc -- scripts/common.sh@355 -- # echo 2 00:06:40.066 14:26:47 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.066 14:26:47 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.066 14:26:47 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.066 14:26:47 rpc -- scripts/common.sh@368 -- # return 0 00:06:40.066 14:26:47 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.066 14:26:47 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.066 --rc genhtml_branch_coverage=1 00:06:40.066 --rc genhtml_function_coverage=1 00:06:40.066 --rc genhtml_legend=1 00:06:40.066 --rc geninfo_all_blocks=1 00:06:40.066 --rc geninfo_unexecuted_blocks=1 00:06:40.066 00:06:40.066 ' 00:06:40.066 14:26:47 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.066 --rc genhtml_branch_coverage=1 00:06:40.066 --rc genhtml_function_coverage=1 00:06:40.066 --rc genhtml_legend=1 00:06:40.066 --rc geninfo_all_blocks=1 00:06:40.066 --rc geninfo_unexecuted_blocks=1 00:06:40.066 00:06:40.066 ' 00:06:40.066 14:26:47 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:40.066 --rc genhtml_branch_coverage=1 00:06:40.066 --rc genhtml_function_coverage=1 00:06:40.066 --rc genhtml_legend=1 00:06:40.066 --rc geninfo_all_blocks=1 00:06:40.066 --rc geninfo_unexecuted_blocks=1 00:06:40.066 00:06:40.066 ' 00:06:40.066 14:26:47 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.066 --rc genhtml_branch_coverage=1 00:06:40.066 --rc genhtml_function_coverage=1 00:06:40.066 --rc genhtml_legend=1 00:06:40.066 --rc geninfo_all_blocks=1 00:06:40.066 --rc geninfo_unexecuted_blocks=1 00:06:40.066 00:06:40.066 ' 00:06:40.066 14:26:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3656803 00:06:40.066 14:26:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:40.066 14:26:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3656803 00:06:40.066 14:26:47 rpc -- common/autotest_common.sh@835 -- # '[' -z 3656803 ']' 00:06:40.066 14:26:47 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.066 14:26:47 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:40.066 14:26:47 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.066 14:26:47 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.066 14:26:47 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.066 14:26:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.066 [2024-11-20 14:26:47.070975] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:06:40.066 [2024-11-20 14:26:47.071045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3656803 ] 00:06:40.326 [2024-11-20 14:26:47.154476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.326 [2024-11-20 14:26:47.206040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:40.326 [2024-11-20 14:26:47.206093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3656803' to capture a snapshot of events at runtime. 00:06:40.326 [2024-11-20 14:26:47.206102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:40.326 [2024-11-20 14:26:47.206109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:40.326 [2024-11-20 14:26:47.206115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3656803 for offline analysis/debug. 
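The "[ DPDK EAL parameters: ... ]" entry above lists the flags spdk_tgt was started with. As an illustrative aside (plain log parsing, not an SPDK or DPDK API), such a parameter string splits cleanly into boolean flags and key=value options:

```python
import shlex

# Split a logged EAL parameter string (as in the entry above) into
# boolean flags and key=value options. Illustrative only.
def parse_eal_params(param_str):
    flags, options = [], {}
    tokens = iter(shlex.split(param_str))
    for tok in tokens:
        if tok.startswith("--") and "=" in tok:
            key, _, val = tok.partition("=")
            options[key.lstrip("-")] = val
        elif tok == "-c":  # coremask takes a separate argument
            options["coremask"] = next(tokens)
        elif tok.startswith("-"):
            flags.append(tok.lstrip("-"))
    return flags, options

flags, opts = parse_eal_params(
    "spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry "
    "--base-virtaddr=0x200000000000 --file-prefix=spdk_pid3656803")
print(opts["coremask"])  # 0x1
```

Here `--file-prefix=spdk_pid3656803` is what ties the hugepage files and the `/dev/shm/spdk_tgt_trace.pid3656803` trace file mentioned above to this particular spdk_tgt process.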
00:06:40.326 [2024-11-20 14:26:47.206921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.893 14:26:47 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.893 14:26:47 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:40.893 14:26:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:40.893 14:26:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:40.893 14:26:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:40.893 14:26:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:40.893 14:26:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.893 14:26:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.893 14:26:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.893 ************************************ 00:06:40.893 START TEST rpc_integrity 00:06:40.893 ************************************ 00:06:40.893 14:26:47 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:40.893 14:26:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:40.893 14:26:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.893 14:26:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.893 14:26:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.893 14:26:47 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:06:40.893 14:26:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:40.893 14:26:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:40.893 14:26:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:40.893 14:26:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.893 14:26:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.153 14:26:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.153 14:26:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:41.153 14:26:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:41.153 14:26:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.153 14:26:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.153 14:26:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.153 14:26:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:41.153 { 00:06:41.153 "name": "Malloc0", 00:06:41.153 "aliases": [ 00:06:41.153 "06f36f76-9648-40e5-a7f9-13c931abff29" 00:06:41.153 ], 00:06:41.153 "product_name": "Malloc disk", 00:06:41.153 "block_size": 512, 00:06:41.153 "num_blocks": 16384, 00:06:41.153 "uuid": "06f36f76-9648-40e5-a7f9-13c931abff29", 00:06:41.153 "assigned_rate_limits": { 00:06:41.153 "rw_ios_per_sec": 0, 00:06:41.153 "rw_mbytes_per_sec": 0, 00:06:41.153 "r_mbytes_per_sec": 0, 00:06:41.153 "w_mbytes_per_sec": 0 00:06:41.153 }, 00:06:41.153 "claimed": false, 00:06:41.153 "zoned": false, 00:06:41.153 "supported_io_types": { 00:06:41.153 "read": true, 00:06:41.153 "write": true, 00:06:41.153 "unmap": true, 00:06:41.153 "flush": true, 00:06:41.153 "reset": true, 00:06:41.153 "nvme_admin": false, 00:06:41.153 "nvme_io": false, 00:06:41.153 "nvme_io_md": false, 00:06:41.153 "write_zeroes": true, 00:06:41.153 "zcopy": true, 00:06:41.153 "get_zone_info": false, 00:06:41.153 
"zone_management": false, 00:06:41.153 "zone_append": false, 00:06:41.153 "compare": false, 00:06:41.153 "compare_and_write": false, 00:06:41.153 "abort": true, 00:06:41.153 "seek_hole": false, 00:06:41.153 "seek_data": false, 00:06:41.153 "copy": true, 00:06:41.153 "nvme_iov_md": false 00:06:41.153 }, 00:06:41.153 "memory_domains": [ 00:06:41.153 { 00:06:41.153 "dma_device_id": "system", 00:06:41.153 "dma_device_type": 1 00:06:41.153 }, 00:06:41.153 { 00:06:41.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.153 "dma_device_type": 2 00:06:41.153 } 00:06:41.153 ], 00:06:41.153 "driver_specific": {} 00:06:41.153 } 00:06:41.153 ]' 00:06:41.153 14:26:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:41.153 14:26:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:41.153 14:26:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:41.153 14:26:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.153 14:26:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.153 [2024-11-20 14:26:48.002895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:41.153 [2024-11-20 14:26:48.002939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.153 [2024-11-20 14:26:48.002954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x105f580 00:06:41.153 [2024-11-20 14:26:48.002962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.153 [2024-11-20 14:26:48.004522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.153 [2024-11-20 14:26:48.004559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:41.153 Passthru0 00:06:41.153 14:26:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.153 14:26:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:06:41.153 14:26:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.153 14:26:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.153 14:26:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.153 14:26:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:41.153 { 00:06:41.153 "name": "Malloc0", 00:06:41.153 "aliases": [ 00:06:41.153 "06f36f76-9648-40e5-a7f9-13c931abff29" 00:06:41.153 ], 00:06:41.153 "product_name": "Malloc disk", 00:06:41.153 "block_size": 512, 00:06:41.153 "num_blocks": 16384, 00:06:41.153 "uuid": "06f36f76-9648-40e5-a7f9-13c931abff29", 00:06:41.153 "assigned_rate_limits": { 00:06:41.154 "rw_ios_per_sec": 0, 00:06:41.154 "rw_mbytes_per_sec": 0, 00:06:41.154 "r_mbytes_per_sec": 0, 00:06:41.154 "w_mbytes_per_sec": 0 00:06:41.154 }, 00:06:41.154 "claimed": true, 00:06:41.154 "claim_type": "exclusive_write", 00:06:41.154 "zoned": false, 00:06:41.154 "supported_io_types": { 00:06:41.154 "read": true, 00:06:41.154 "write": true, 00:06:41.154 "unmap": true, 00:06:41.154 "flush": true, 00:06:41.154 "reset": true, 00:06:41.154 "nvme_admin": false, 00:06:41.154 "nvme_io": false, 00:06:41.154 "nvme_io_md": false, 00:06:41.154 "write_zeroes": true, 00:06:41.154 "zcopy": true, 00:06:41.154 "get_zone_info": false, 00:06:41.154 "zone_management": false, 00:06:41.154 "zone_append": false, 00:06:41.154 "compare": false, 00:06:41.154 "compare_and_write": false, 00:06:41.154 "abort": true, 00:06:41.154 "seek_hole": false, 00:06:41.154 "seek_data": false, 00:06:41.154 "copy": true, 00:06:41.154 "nvme_iov_md": false 00:06:41.154 }, 00:06:41.154 "memory_domains": [ 00:06:41.154 { 00:06:41.154 "dma_device_id": "system", 00:06:41.154 "dma_device_type": 1 00:06:41.154 }, 00:06:41.154 { 00:06:41.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.154 "dma_device_type": 2 00:06:41.154 } 00:06:41.154 ], 00:06:41.154 "driver_specific": {} 00:06:41.154 }, 00:06:41.154 { 
00:06:41.154 "name": "Passthru0", 00:06:41.154 "aliases": [ 00:06:41.154 "a1ea268b-4c16-5988-ae87-1409c0d79606" 00:06:41.154 ], 00:06:41.154 "product_name": "passthru", 00:06:41.154 "block_size": 512, 00:06:41.154 "num_blocks": 16384, 00:06:41.154 "uuid": "a1ea268b-4c16-5988-ae87-1409c0d79606", 00:06:41.154 "assigned_rate_limits": { 00:06:41.154 "rw_ios_per_sec": 0, 00:06:41.154 "rw_mbytes_per_sec": 0, 00:06:41.154 "r_mbytes_per_sec": 0, 00:06:41.154 "w_mbytes_per_sec": 0 00:06:41.154 }, 00:06:41.154 "claimed": false, 00:06:41.154 "zoned": false, 00:06:41.154 "supported_io_types": { 00:06:41.154 "read": true, 00:06:41.154 "write": true, 00:06:41.154 "unmap": true, 00:06:41.154 "flush": true, 00:06:41.154 "reset": true, 00:06:41.154 "nvme_admin": false, 00:06:41.154 "nvme_io": false, 00:06:41.154 "nvme_io_md": false, 00:06:41.154 "write_zeroes": true, 00:06:41.154 "zcopy": true, 00:06:41.154 "get_zone_info": false, 00:06:41.154 "zone_management": false, 00:06:41.154 "zone_append": false, 00:06:41.154 "compare": false, 00:06:41.154 "compare_and_write": false, 00:06:41.154 "abort": true, 00:06:41.154 "seek_hole": false, 00:06:41.154 "seek_data": false, 00:06:41.154 "copy": true, 00:06:41.154 "nvme_iov_md": false 00:06:41.154 }, 00:06:41.154 "memory_domains": [ 00:06:41.154 { 00:06:41.154 "dma_device_id": "system", 00:06:41.154 "dma_device_type": 1 00:06:41.154 }, 00:06:41.154 { 00:06:41.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.154 "dma_device_type": 2 00:06:41.154 } 00:06:41.154 ], 00:06:41.154 "driver_specific": { 00:06:41.154 "passthru": { 00:06:41.154 "name": "Passthru0", 00:06:41.154 "base_bdev_name": "Malloc0" 00:06:41.154 } 00:06:41.154 } 00:06:41.154 } 00:06:41.154 ]' 00:06:41.154 14:26:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:41.154 14:26:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:41.154 14:26:48 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:41.154 14:26:48 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.154 14:26:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.154 14:26:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.154 14:26:48 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:41.154 14:26:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.154 14:26:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.154 14:26:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.154 14:26:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:41.154 14:26:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.154 14:26:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.154 14:26:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.154 14:26:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:41.154 14:26:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:41.154 14:26:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:41.154 00:06:41.154 real 0m0.205s 00:06:41.154 user 0m0.112s 00:06:41.154 sys 0m0.031s 00:06:41.154 14:26:48 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.154 14:26:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.154 ************************************ 00:06:41.154 END TEST rpc_integrity 00:06:41.154 ************************************ 00:06:41.154 14:26:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:41.154 14:26:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.154 14:26:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.154 14:26:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.154 ************************************ 00:06:41.154 START TEST rpc_plugins 
00:06:41.154 ************************************ 00:06:41.154 14:26:48 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:41.154 14:26:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:41.154 14:26:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.154 14:26:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:41.154 14:26:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.154 14:26:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:41.154 14:26:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:41.154 14:26:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.154 14:26:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:41.154 14:26:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.154 14:26:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:41.154 { 00:06:41.154 "name": "Malloc1", 00:06:41.154 "aliases": [ 00:06:41.154 "26255212-6395-4926-b15c-650c9f142401" 00:06:41.154 ], 00:06:41.154 "product_name": "Malloc disk", 00:06:41.154 "block_size": 4096, 00:06:41.154 "num_blocks": 256, 00:06:41.154 "uuid": "26255212-6395-4926-b15c-650c9f142401", 00:06:41.154 "assigned_rate_limits": { 00:06:41.154 "rw_ios_per_sec": 0, 00:06:41.154 "rw_mbytes_per_sec": 0, 00:06:41.154 "r_mbytes_per_sec": 0, 00:06:41.154 "w_mbytes_per_sec": 0 00:06:41.154 }, 00:06:41.154 "claimed": false, 00:06:41.154 "zoned": false, 00:06:41.154 "supported_io_types": { 00:06:41.154 "read": true, 00:06:41.154 "write": true, 00:06:41.154 "unmap": true, 00:06:41.154 "flush": true, 00:06:41.154 "reset": true, 00:06:41.154 "nvme_admin": false, 00:06:41.154 "nvme_io": false, 00:06:41.154 "nvme_io_md": false, 00:06:41.154 "write_zeroes": true, 00:06:41.154 "zcopy": true, 00:06:41.154 "get_zone_info": false, 00:06:41.154 "zone_management": false, 00:06:41.154 
"zone_append": false, 00:06:41.154 "compare": false, 00:06:41.154 "compare_and_write": false, 00:06:41.154 "abort": true, 00:06:41.154 "seek_hole": false, 00:06:41.154 "seek_data": false, 00:06:41.154 "copy": true, 00:06:41.154 "nvme_iov_md": false 00:06:41.154 }, 00:06:41.154 "memory_domains": [ 00:06:41.154 { 00:06:41.154 "dma_device_id": "system", 00:06:41.154 "dma_device_type": 1 00:06:41.154 }, 00:06:41.154 { 00:06:41.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.154 "dma_device_type": 2 00:06:41.154 } 00:06:41.154 ], 00:06:41.154 "driver_specific": {} 00:06:41.154 } 00:06:41.155 ]' 00:06:41.155 14:26:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:41.414 14:26:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:41.414 14:26:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:41.414 14:26:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.414 14:26:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:41.414 14:26:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.414 14:26:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:41.414 14:26:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.414 14:26:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:41.414 14:26:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.414 14:26:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:41.414 14:26:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:41.414 14:26:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:41.414 00:06:41.414 real 0m0.102s 00:06:41.414 user 0m0.058s 00:06:41.414 sys 0m0.015s 00:06:41.414 14:26:48 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.414 14:26:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:41.414 ************************************ 
00:06:41.414 END TEST rpc_plugins 00:06:41.414 ************************************ 00:06:41.414 14:26:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:41.414 14:26:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.414 14:26:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.414 14:26:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.414 ************************************ 00:06:41.414 START TEST rpc_trace_cmd_test 00:06:41.414 ************************************ 00:06:41.414 14:26:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:41.414 14:26:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:41.414 14:26:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:41.414 14:26:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.414 14:26:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.414 14:26:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.414 14:26:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:41.414 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3656803", 00:06:41.414 "tpoint_group_mask": "0x8", 00:06:41.414 "iscsi_conn": { 00:06:41.414 "mask": "0x2", 00:06:41.414 "tpoint_mask": "0x0" 00:06:41.414 }, 00:06:41.414 "scsi": { 00:06:41.414 "mask": "0x4", 00:06:41.414 "tpoint_mask": "0x0" 00:06:41.414 }, 00:06:41.414 "bdev": { 00:06:41.414 "mask": "0x8", 00:06:41.414 "tpoint_mask": "0xffffffffffffffff" 00:06:41.414 }, 00:06:41.414 "nvmf_rdma": { 00:06:41.414 "mask": "0x10", 00:06:41.414 "tpoint_mask": "0x0" 00:06:41.414 }, 00:06:41.414 "nvmf_tcp": { 00:06:41.414 "mask": "0x20", 00:06:41.414 "tpoint_mask": "0x0" 00:06:41.414 }, 00:06:41.414 "ftl": { 00:06:41.414 "mask": "0x40", 00:06:41.414 "tpoint_mask": "0x0" 00:06:41.414 }, 00:06:41.414 "blobfs": { 00:06:41.414 "mask": "0x80", 00:06:41.414 
"tpoint_mask": "0x0" 00:06:41.414 }, 00:06:41.414 "dsa": { 00:06:41.415 "mask": "0x200", 00:06:41.415 "tpoint_mask": "0x0" 00:06:41.415 }, 00:06:41.415 "thread": { 00:06:41.415 "mask": "0x400", 00:06:41.415 "tpoint_mask": "0x0" 00:06:41.415 }, 00:06:41.415 "nvme_pcie": { 00:06:41.415 "mask": "0x800", 00:06:41.415 "tpoint_mask": "0x0" 00:06:41.415 }, 00:06:41.415 "iaa": { 00:06:41.415 "mask": "0x1000", 00:06:41.415 "tpoint_mask": "0x0" 00:06:41.415 }, 00:06:41.415 "nvme_tcp": { 00:06:41.415 "mask": "0x2000", 00:06:41.415 "tpoint_mask": "0x0" 00:06:41.415 }, 00:06:41.415 "bdev_nvme": { 00:06:41.415 "mask": "0x4000", 00:06:41.415 "tpoint_mask": "0x0" 00:06:41.415 }, 00:06:41.415 "sock": { 00:06:41.415 "mask": "0x8000", 00:06:41.415 "tpoint_mask": "0x0" 00:06:41.415 }, 00:06:41.415 "blob": { 00:06:41.415 "mask": "0x10000", 00:06:41.415 "tpoint_mask": "0x0" 00:06:41.415 }, 00:06:41.415 "bdev_raid": { 00:06:41.415 "mask": "0x20000", 00:06:41.415 "tpoint_mask": "0x0" 00:06:41.415 }, 00:06:41.415 "scheduler": { 00:06:41.415 "mask": "0x40000", 00:06:41.415 "tpoint_mask": "0x0" 00:06:41.415 } 00:06:41.415 }' 00:06:41.415 14:26:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:41.415 14:26:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:41.415 14:26:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:41.415 14:26:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:41.415 14:26:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:41.415 14:26:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:41.415 14:26:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:41.415 14:26:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:41.415 14:26:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:41.675 14:26:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:06:41.675 00:06:41.675 real 0m0.160s 00:06:41.675 user 0m0.125s 00:06:41.675 sys 0m0.026s 00:06:41.675 14:26:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.675 14:26:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.675 ************************************ 00:06:41.675 END TEST rpc_trace_cmd_test 00:06:41.675 ************************************ 00:06:41.675 14:26:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:41.675 14:26:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:41.675 14:26:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:41.675 14:26:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.675 14:26:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.675 14:26:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.675 ************************************ 00:06:41.675 START TEST rpc_daemon_integrity 00:06:41.675 ************************************ 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:41.675 { 00:06:41.675 "name": "Malloc2", 00:06:41.675 "aliases": [ 00:06:41.675 "90063b52-bb45-4c46-b8c7-c34202cb8147" 00:06:41.675 ], 00:06:41.675 "product_name": "Malloc disk", 00:06:41.675 "block_size": 512, 00:06:41.675 "num_blocks": 16384, 00:06:41.675 "uuid": "90063b52-bb45-4c46-b8c7-c34202cb8147", 00:06:41.675 "assigned_rate_limits": { 00:06:41.675 "rw_ios_per_sec": 0, 00:06:41.675 "rw_mbytes_per_sec": 0, 00:06:41.675 "r_mbytes_per_sec": 0, 00:06:41.675 "w_mbytes_per_sec": 0 00:06:41.675 }, 00:06:41.675 "claimed": false, 00:06:41.675 "zoned": false, 00:06:41.675 "supported_io_types": { 00:06:41.675 "read": true, 00:06:41.675 "write": true, 00:06:41.675 "unmap": true, 00:06:41.675 "flush": true, 00:06:41.675 "reset": true, 00:06:41.675 "nvme_admin": false, 00:06:41.675 "nvme_io": false, 00:06:41.675 "nvme_io_md": false, 00:06:41.675 "write_zeroes": true, 00:06:41.675 "zcopy": true, 00:06:41.675 "get_zone_info": false, 00:06:41.675 "zone_management": false, 00:06:41.675 "zone_append": false, 00:06:41.675 "compare": false, 00:06:41.675 "compare_and_write": false, 00:06:41.675 "abort": true, 00:06:41.675 "seek_hole": false, 00:06:41.675 "seek_data": false, 00:06:41.675 "copy": true, 00:06:41.675 "nvme_iov_md": false 00:06:41.675 }, 00:06:41.675 "memory_domains": [ 00:06:41.675 { 
00:06:41.675 "dma_device_id": "system", 00:06:41.675 "dma_device_type": 1 00:06:41.675 }, 00:06:41.675 { 00:06:41.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.675 "dma_device_type": 2 00:06:41.675 } 00:06:41.675 ], 00:06:41.675 "driver_specific": {} 00:06:41.675 } 00:06:41.675 ]' 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.675 [2024-11-20 14:26:48.624569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:41.675 [2024-11-20 14:26:48.624608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.675 [2024-11-20 14:26:48.624622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfacc50 00:06:41.675 [2024-11-20 14:26:48.624630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.675 [2024-11-20 14:26:48.626064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.675 [2024-11-20 14:26:48.626100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:41.675 Passthru0 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:41.675 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:41.675 { 00:06:41.675 "name": "Malloc2", 00:06:41.675 "aliases": [ 00:06:41.675 "90063b52-bb45-4c46-b8c7-c34202cb8147" 00:06:41.675 ], 00:06:41.675 "product_name": "Malloc disk", 00:06:41.675 "block_size": 512, 00:06:41.675 "num_blocks": 16384, 00:06:41.675 "uuid": "90063b52-bb45-4c46-b8c7-c34202cb8147", 00:06:41.675 "assigned_rate_limits": { 00:06:41.675 "rw_ios_per_sec": 0, 00:06:41.675 "rw_mbytes_per_sec": 0, 00:06:41.675 "r_mbytes_per_sec": 0, 00:06:41.675 "w_mbytes_per_sec": 0 00:06:41.675 }, 00:06:41.675 "claimed": true, 00:06:41.675 "claim_type": "exclusive_write", 00:06:41.675 "zoned": false, 00:06:41.675 "supported_io_types": { 00:06:41.675 "read": true, 00:06:41.675 "write": true, 00:06:41.675 "unmap": true, 00:06:41.675 "flush": true, 00:06:41.675 "reset": true, 00:06:41.675 "nvme_admin": false, 00:06:41.675 "nvme_io": false, 00:06:41.675 "nvme_io_md": false, 00:06:41.675 "write_zeroes": true, 00:06:41.675 "zcopy": true, 00:06:41.675 "get_zone_info": false, 00:06:41.675 "zone_management": false, 00:06:41.675 "zone_append": false, 00:06:41.675 "compare": false, 00:06:41.675 "compare_and_write": false, 00:06:41.675 "abort": true, 00:06:41.675 "seek_hole": false, 00:06:41.675 "seek_data": false, 00:06:41.675 "copy": true, 00:06:41.675 "nvme_iov_md": false 00:06:41.675 }, 00:06:41.675 "memory_domains": [ 00:06:41.675 { 00:06:41.675 "dma_device_id": "system", 00:06:41.675 "dma_device_type": 1 00:06:41.675 }, 00:06:41.675 { 00:06:41.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.675 "dma_device_type": 2 00:06:41.675 } 00:06:41.675 ], 00:06:41.675 "driver_specific": {} 00:06:41.675 }, 00:06:41.675 { 00:06:41.675 "name": "Passthru0", 00:06:41.675 "aliases": [ 00:06:41.675 "62d80d82-2f33-5bf5-9706-bce89a570300" 00:06:41.675 ], 00:06:41.675 "product_name": "passthru", 00:06:41.675 "block_size": 512, 00:06:41.675 "num_blocks": 16384, 00:06:41.675 "uuid": 
"62d80d82-2f33-5bf5-9706-bce89a570300", 00:06:41.675 "assigned_rate_limits": { 00:06:41.675 "rw_ios_per_sec": 0, 00:06:41.675 "rw_mbytes_per_sec": 0, 00:06:41.676 "r_mbytes_per_sec": 0, 00:06:41.676 "w_mbytes_per_sec": 0 00:06:41.676 }, 00:06:41.676 "claimed": false, 00:06:41.676 "zoned": false, 00:06:41.676 "supported_io_types": { 00:06:41.676 "read": true, 00:06:41.676 "write": true, 00:06:41.676 "unmap": true, 00:06:41.676 "flush": true, 00:06:41.676 "reset": true, 00:06:41.676 "nvme_admin": false, 00:06:41.676 "nvme_io": false, 00:06:41.676 "nvme_io_md": false, 00:06:41.676 "write_zeroes": true, 00:06:41.676 "zcopy": true, 00:06:41.676 "get_zone_info": false, 00:06:41.676 "zone_management": false, 00:06:41.676 "zone_append": false, 00:06:41.676 "compare": false, 00:06:41.676 "compare_and_write": false, 00:06:41.676 "abort": true, 00:06:41.676 "seek_hole": false, 00:06:41.676 "seek_data": false, 00:06:41.676 "copy": true, 00:06:41.676 "nvme_iov_md": false 00:06:41.676 }, 00:06:41.676 "memory_domains": [ 00:06:41.676 { 00:06:41.676 "dma_device_id": "system", 00:06:41.676 "dma_device_type": 1 00:06:41.676 }, 00:06:41.676 { 00:06:41.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.676 "dma_device_type": 2 00:06:41.676 } 00:06:41.676 ], 00:06:41.676 "driver_specific": { 00:06:41.676 "passthru": { 00:06:41.676 "name": "Passthru0", 00:06:41.676 "base_bdev_name": "Malloc2" 00:06:41.676 } 00:06:41.676 } 00:06:41.676 } 00:06:41.676 ]' 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:41.676 00:06:41.676 real 0m0.203s 00:06:41.676 user 0m0.116s 00:06:41.676 sys 0m0.029s 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.676 14:26:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.676 ************************************ 00:06:41.676 END TEST rpc_daemon_integrity 00:06:41.676 ************************************ 00:06:41.935 14:26:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:41.935 14:26:48 rpc -- rpc/rpc.sh@84 -- # killprocess 3656803 00:06:41.935 14:26:48 rpc -- common/autotest_common.sh@954 -- # '[' -z 3656803 ']' 00:06:41.935 14:26:48 rpc -- common/autotest_common.sh@958 -- # kill -0 3656803 00:06:41.935 14:26:48 rpc -- common/autotest_common.sh@959 -- # uname 00:06:41.935 14:26:48 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.935 14:26:48 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3656803 00:06:41.935 14:26:48 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.935 14:26:48 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.935 14:26:48 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3656803' 00:06:41.935 killing process with pid 3656803 00:06:41.935 14:26:48 rpc -- common/autotest_common.sh@973 -- # kill 3656803 00:06:41.935 14:26:48 rpc -- common/autotest_common.sh@978 -- # wait 3656803 00:06:42.194 00:06:42.194 real 0m2.160s 00:06:42.194 user 0m2.592s 00:06:42.194 sys 0m0.674s 00:06:42.194 14:26:49 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.194 14:26:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.194 ************************************ 00:06:42.194 END TEST rpc 00:06:42.194 ************************************ 00:06:42.194 14:26:49 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:42.194 14:26:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.194 14:26:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.194 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:06:42.194 ************************************ 00:06:42.194 START TEST skip_rpc 00:06:42.194 ************************************ 00:06:42.194 14:26:49 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:42.194 * Looking for test storage... 
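The rpc_integrity and rpc_daemon_integrity runs above follow the same pattern: create a malloc bdev, claim it with `bdev_passthru_create`, capture `bdev_get_bdevs`, and assert on the JSON with `jq` (`'[' 2 == 2 ']'` on the list length, then `'[' 0 == 0 ']'` after deletion). As a minimal self-contained sketch, the same assertions can be expressed in Python against a sample abbreviated from the JSON captured in this log (field subset chosen for illustration, not the full bdev schema):

```python
import json

# Abbreviated from the bdev_get_bdevs output captured above: after
# bdev_passthru_create, the list holds the claimed base bdev plus the
# passthru vbdev that claimed it.
bdevs = json.loads("""
[
  {"name": "Malloc0", "product_name": "Malloc disk",
   "block_size": 512, "num_blocks": 16384,
   "claimed": true, "claim_type": "exclusive_write"},
  {"name": "Passthru0", "product_name": "passthru",
   "block_size": 512, "num_blocks": 16384, "claimed": false,
   "driver_specific": {"passthru": {"name": "Passthru0",
                                    "base_bdev_name": "Malloc0"}}}
]
""")

# Equivalent of rpc.sh's `jq length` check: exactly two bdevs exist.
assert len(bdevs) == 2

# The base bdev must be claimed exclusively, and the passthru vbdev must
# point back at it via driver_specific.
base, pt = bdevs
assert base["claimed"] and base["claim_type"] == "exclusive_write"
assert pt["driver_specific"]["passthru"]["base_bdev_name"] == "Malloc0"

# After bdev_passthru_delete and bdev_malloc_delete, the list is empty
# again (`jq length` == 0 in the log above).
assert len(json.loads("[]")) == 0
```

The `jq length` comparisons in the log are doing exactly these list-length checks on the captured RPC output.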
00:06:42.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:42.194 14:26:49 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:42.194 14:26:49 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:42.194 14:26:49 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.194 14:26:49 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.194 14:26:49 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.194 14:26:49 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.194 14:26:49 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.195 14:26:49 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:42.195 14:26:49 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.195 14:26:49 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.195 --rc genhtml_branch_coverage=1 00:06:42.195 --rc genhtml_function_coverage=1 00:06:42.195 --rc genhtml_legend=1 00:06:42.195 --rc geninfo_all_blocks=1 00:06:42.195 --rc geninfo_unexecuted_blocks=1 00:06:42.195 00:06:42.195 ' 00:06:42.195 14:26:49 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.195 --rc genhtml_branch_coverage=1 00:06:42.195 --rc genhtml_function_coverage=1 00:06:42.195 --rc genhtml_legend=1 00:06:42.195 --rc geninfo_all_blocks=1 00:06:42.195 --rc geninfo_unexecuted_blocks=1 00:06:42.195 00:06:42.195 ' 00:06:42.195 14:26:49 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:42.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.195 --rc genhtml_branch_coverage=1 00:06:42.195 --rc genhtml_function_coverage=1 00:06:42.195 --rc genhtml_legend=1 00:06:42.195 --rc geninfo_all_blocks=1 00:06:42.195 --rc geninfo_unexecuted_blocks=1 00:06:42.195 00:06:42.195 ' 00:06:42.195 14:26:49 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.195 --rc genhtml_branch_coverage=1 00:06:42.195 --rc genhtml_function_coverage=1 00:06:42.195 --rc genhtml_legend=1 00:06:42.195 --rc geninfo_all_blocks=1 00:06:42.195 --rc geninfo_unexecuted_blocks=1 00:06:42.195 00:06:42.195 ' 00:06:42.195 14:26:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:42.195 14:26:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:42.195 14:26:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:42.195 14:26:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.195 14:26:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.195 14:26:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.195 ************************************ 00:06:42.195 START TEST skip_rpc 00:06:42.195 ************************************ 00:06:42.195 14:26:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:42.454 14:26:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3657494 00:06:42.454 14:26:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:42.454 14:26:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:42.454 14:26:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 
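The skip_rpc test that starts here launches `spdk_tgt` with `--no-rpc-server`, so no RPC listen socket is created and the subsequent `NOT rpc_cmd spdk_get_version` must fail (the `es=1` / `(( !es == 0 ))` lines below). A rough self-contained sketch of that expectation, with the socket path being an assumption (SPDK's default RPC address is `/var/tmp/spdk.sock`; a deliberately nonexistent path stands in for it here):

```python
import socket

def rpc_reachable(path):
    """Return True if a Unix-domain socket is accepting connections at path.

    Stands in for rpc_cmd's attempt to reach the SPDK RPC server; with
    --no-rpc-server the socket is never created, so this must fail.
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.settimeout(0.2)
        s.connect(path)
        return True
    except OSError:
        return False
    finally:
        s.close()

# With no RPC server listening, the probe reports failure -- the es=1
# outcome that the NOT wrapper in skip_rpc.sh asserts on.
assert not rpc_reachable("/var/tmp/spdk_nonexistent.sock")
```

This mirrors the log's control flow only; the real test drives `rpc_cmd` through the `NOT`/`valid_exec_arg` helpers visible in the trace below.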
00:06:42.454 [2024-11-20 14:26:49.301447] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:06:42.454 [2024-11-20 14:26:49.301514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657494 ] 00:06:42.454 [2024-11-20 14:26:49.387189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.454 [2024-11-20 14:26:49.441029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.724 14:26:54 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3657494 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3657494 ']' 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3657494 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3657494 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3657494' 00:06:47.724 killing process with pid 3657494 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3657494 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3657494 00:06:47.724 00:06:47.724 real 0m5.244s 00:06:47.724 user 0m5.002s 00:06:47.724 sys 0m0.272s 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.724 14:26:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.724 ************************************ 00:06:47.724 END TEST skip_rpc 00:06:47.724 ************************************ 00:06:47.724 14:26:54 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:47.724 14:26:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.724 14:26:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.724 14:26:54 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.724 ************************************ 00:06:47.724 START TEST skip_rpc_with_json 00:06:47.724 ************************************ 00:06:47.724 14:26:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:47.724 14:26:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:47.724 14:26:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3658691 00:06:47.724 14:26:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.724 14:26:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3658691 00:06:47.724 14:26:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.724 14:26:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3658691 ']' 00:06:47.724 14:26:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.724 14:26:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.724 14:26:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.724 14:26:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.724 14:26:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:47.724 [2024-11-20 14:26:54.585005] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:06:47.724 [2024-11-20 14:26:54.585053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658691 ] 00:06:47.724 [2024-11-20 14:26:54.650400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.724 [2024-11-20 14:26:54.682042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.984 14:26:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.984 14:26:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:47.984 14:26:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:47.984 14:26:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.984 14:26:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:47.984 [2024-11-20 14:26:54.849221] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:47.984 request: 00:06:47.984 { 00:06:47.984 "trtype": "tcp", 00:06:47.984 "method": "nvmf_get_transports", 00:06:47.984 "req_id": 1 00:06:47.984 } 00:06:47.984 Got JSON-RPC error response 00:06:47.984 response: 00:06:47.984 { 00:06:47.984 "code": -19, 00:06:47.984 "message": "No such device" 00:06:47.984 } 00:06:47.984 14:26:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:47.984 14:26:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:47.984 14:26:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.984 14:26:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:47.984 [2024-11-20 14:26:54.857305] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.984 14:26:54 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.984 14:26:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:47.984 14:26:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.984 14:26:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:47.984 14:26:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.984 14:26:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:47.984 { 00:06:47.984 "subsystems": [ 00:06:47.984 { 00:06:47.984 "subsystem": "fsdev", 00:06:47.984 "config": [ 00:06:47.984 { 00:06:47.984 "method": "fsdev_set_opts", 00:06:47.984 "params": { 00:06:47.984 "fsdev_io_pool_size": 65535, 00:06:47.984 "fsdev_io_cache_size": 256 00:06:47.984 } 00:06:47.984 } 00:06:47.984 ] 00:06:47.984 }, 00:06:47.984 { 00:06:47.984 "subsystem": "vfio_user_target", 00:06:47.984 "config": null 00:06:47.984 }, 00:06:47.984 { 00:06:47.984 "subsystem": "keyring", 00:06:47.984 "config": [] 00:06:47.984 }, 00:06:47.984 { 00:06:47.984 "subsystem": "iobuf", 00:06:47.984 "config": [ 00:06:47.984 { 00:06:47.984 "method": "iobuf_set_options", 00:06:47.984 "params": { 00:06:47.984 "small_pool_count": 8192, 00:06:47.984 "large_pool_count": 1024, 00:06:47.984 "small_bufsize": 8192, 00:06:47.984 "large_bufsize": 135168, 00:06:47.984 "enable_numa": false 00:06:47.984 } 00:06:47.984 } 00:06:47.984 ] 00:06:47.984 }, 00:06:47.984 { 00:06:47.984 "subsystem": "sock", 00:06:47.984 "config": [ 00:06:47.984 { 00:06:47.984 "method": "sock_set_default_impl", 00:06:47.984 "params": { 00:06:47.984 "impl_name": "posix" 00:06:47.984 } 00:06:47.984 }, 00:06:47.984 { 00:06:47.984 "method": "sock_impl_set_options", 00:06:47.984 "params": { 00:06:47.984 "impl_name": "ssl", 00:06:47.984 "recv_buf_size": 4096, 00:06:47.984 "send_buf_size": 4096, 
00:06:47.984 "enable_recv_pipe": true, 00:06:47.984 "enable_quickack": false, 00:06:47.984 "enable_placement_id": 0, 00:06:47.984 "enable_zerocopy_send_server": true, 00:06:47.984 "enable_zerocopy_send_client": false, 00:06:47.984 "zerocopy_threshold": 0, 00:06:47.984 "tls_version": 0, 00:06:47.984 "enable_ktls": false 00:06:47.984 } 00:06:47.984 }, 00:06:47.984 { 00:06:47.984 "method": "sock_impl_set_options", 00:06:47.984 "params": { 00:06:47.984 "impl_name": "posix", 00:06:47.984 "recv_buf_size": 2097152, 00:06:47.984 "send_buf_size": 2097152, 00:06:47.984 "enable_recv_pipe": true, 00:06:47.984 "enable_quickack": false, 00:06:47.984 "enable_placement_id": 0, 00:06:47.984 "enable_zerocopy_send_server": true, 00:06:47.984 "enable_zerocopy_send_client": false, 00:06:47.984 "zerocopy_threshold": 0, 00:06:47.984 "tls_version": 0, 00:06:47.984 "enable_ktls": false 00:06:47.984 } 00:06:47.984 } 00:06:47.984 ] 00:06:47.984 }, 00:06:47.984 { 00:06:47.984 "subsystem": "vmd", 00:06:47.984 "config": [] 00:06:47.984 }, 00:06:47.984 { 00:06:47.984 "subsystem": "accel", 00:06:47.984 "config": [ 00:06:47.984 { 00:06:47.984 "method": "accel_set_options", 00:06:47.984 "params": { 00:06:47.984 "small_cache_size": 128, 00:06:47.984 "large_cache_size": 16, 00:06:47.984 "task_count": 2048, 00:06:47.984 "sequence_count": 2048, 00:06:47.984 "buf_count": 2048 00:06:47.984 } 00:06:47.984 } 00:06:47.984 ] 00:06:47.984 }, 00:06:47.984 { 00:06:47.984 "subsystem": "bdev", 00:06:47.984 "config": [ 00:06:47.984 { 00:06:47.984 "method": "bdev_set_options", 00:06:47.984 "params": { 00:06:47.984 "bdev_io_pool_size": 65535, 00:06:47.984 "bdev_io_cache_size": 256, 00:06:47.984 "bdev_auto_examine": true, 00:06:47.984 "iobuf_small_cache_size": 128, 00:06:47.984 "iobuf_large_cache_size": 16 00:06:47.984 } 00:06:47.984 }, 00:06:47.984 { 00:06:47.984 "method": "bdev_raid_set_options", 00:06:47.984 "params": { 00:06:47.984 "process_window_size_kb": 1024, 00:06:47.984 "process_max_bandwidth_mb_sec": 0 
00:06:47.984 } 00:06:47.984 }, 00:06:47.984 { 00:06:47.984 "method": "bdev_iscsi_set_options", 00:06:47.984 "params": { 00:06:47.984 "timeout_sec": 30 00:06:47.984 } 00:06:47.984 }, 00:06:47.984 { 00:06:47.984 "method": "bdev_nvme_set_options", 00:06:47.984 "params": { 00:06:47.984 "action_on_timeout": "none", 00:06:47.984 "timeout_us": 0, 00:06:47.984 "timeout_admin_us": 0, 00:06:47.984 "keep_alive_timeout_ms": 10000, 00:06:47.984 "arbitration_burst": 0, 00:06:47.984 "low_priority_weight": 0, 00:06:47.984 "medium_priority_weight": 0, 00:06:47.984 "high_priority_weight": 0, 00:06:47.984 "nvme_adminq_poll_period_us": 10000, 00:06:47.984 "nvme_ioq_poll_period_us": 0, 00:06:47.984 "io_queue_requests": 0, 00:06:47.984 "delay_cmd_submit": true, 00:06:47.984 "transport_retry_count": 4, 00:06:47.984 "bdev_retry_count": 3, 00:06:47.984 "transport_ack_timeout": 0, 00:06:47.984 "ctrlr_loss_timeout_sec": 0, 00:06:47.984 "reconnect_delay_sec": 0, 00:06:47.984 "fast_io_fail_timeout_sec": 0, 00:06:47.984 "disable_auto_failback": false, 00:06:47.984 "generate_uuids": false, 00:06:47.984 "transport_tos": 0, 00:06:47.984 "nvme_error_stat": false, 00:06:47.984 "rdma_srq_size": 0, 00:06:47.984 "io_path_stat": false, 00:06:47.984 "allow_accel_sequence": false, 00:06:47.984 "rdma_max_cq_size": 0, 00:06:47.984 "rdma_cm_event_timeout_ms": 0, 00:06:47.984 "dhchap_digests": [ 00:06:47.984 "sha256", 00:06:47.984 "sha384", 00:06:47.984 "sha512" 00:06:47.984 ], 00:06:47.984 "dhchap_dhgroups": [ 00:06:47.984 "null", 00:06:47.984 "ffdhe2048", 00:06:47.984 "ffdhe3072", 00:06:47.984 "ffdhe4096", 00:06:47.984 "ffdhe6144", 00:06:47.984 "ffdhe8192" 00:06:47.984 ] 00:06:47.984 } 00:06:47.984 }, 00:06:47.984 { 00:06:47.984 "method": "bdev_nvme_set_hotplug", 00:06:47.984 "params": { 00:06:47.984 "period_us": 100000, 00:06:47.984 "enable": false 00:06:47.984 } 00:06:47.984 }, 00:06:47.985 { 00:06:47.985 "method": "bdev_wait_for_examine" 00:06:47.985 } 00:06:47.985 ] 00:06:47.985 }, 00:06:47.985 { 
00:06:47.985 "subsystem": "scsi", 00:06:47.985 "config": null 00:06:47.985 }, 00:06:47.985 { 00:06:47.985 "subsystem": "scheduler", 00:06:47.985 "config": [ 00:06:47.985 { 00:06:47.985 "method": "framework_set_scheduler", 00:06:47.985 "params": { 00:06:47.985 "name": "static" 00:06:47.985 } 00:06:47.985 } 00:06:47.985 ] 00:06:47.985 }, 00:06:47.985 { 00:06:47.985 "subsystem": "vhost_scsi", 00:06:47.985 "config": [] 00:06:47.985 }, 00:06:47.985 { 00:06:47.985 "subsystem": "vhost_blk", 00:06:47.985 "config": [] 00:06:47.985 }, 00:06:47.985 { 00:06:47.985 "subsystem": "ublk", 00:06:47.985 "config": [] 00:06:47.985 }, 00:06:47.985 { 00:06:47.985 "subsystem": "nbd", 00:06:47.985 "config": [] 00:06:47.985 }, 00:06:47.985 { 00:06:47.985 "subsystem": "nvmf", 00:06:47.985 "config": [ 00:06:47.985 { 00:06:47.985 "method": "nvmf_set_config", 00:06:47.985 "params": { 00:06:47.985 "discovery_filter": "match_any", 00:06:47.985 "admin_cmd_passthru": { 00:06:47.985 "identify_ctrlr": false 00:06:47.985 }, 00:06:47.985 "dhchap_digests": [ 00:06:47.985 "sha256", 00:06:47.985 "sha384", 00:06:47.985 "sha512" 00:06:47.985 ], 00:06:47.985 "dhchap_dhgroups": [ 00:06:47.985 "null", 00:06:47.985 "ffdhe2048", 00:06:47.985 "ffdhe3072", 00:06:47.985 "ffdhe4096", 00:06:47.985 "ffdhe6144", 00:06:47.985 "ffdhe8192" 00:06:47.985 ] 00:06:47.985 } 00:06:47.985 }, 00:06:47.985 { 00:06:47.985 "method": "nvmf_set_max_subsystems", 00:06:47.985 "params": { 00:06:47.985 "max_subsystems": 1024 00:06:47.985 } 00:06:47.985 }, 00:06:47.985 { 00:06:47.985 "method": "nvmf_set_crdt", 00:06:47.985 "params": { 00:06:47.985 "crdt1": 0, 00:06:47.985 "crdt2": 0, 00:06:47.985 "crdt3": 0 00:06:47.985 } 00:06:47.985 }, 00:06:47.985 { 00:06:47.985 "method": "nvmf_create_transport", 00:06:47.985 "params": { 00:06:47.985 "trtype": "TCP", 00:06:47.985 "max_queue_depth": 128, 00:06:47.985 "max_io_qpairs_per_ctrlr": 127, 00:06:47.985 "in_capsule_data_size": 4096, 00:06:47.985 "max_io_size": 131072, 00:06:47.985 
"io_unit_size": 131072, 00:06:47.985 "max_aq_depth": 128, 00:06:47.985 "num_shared_buffers": 511, 00:06:47.985 "buf_cache_size": 4294967295, 00:06:47.985 "dif_insert_or_strip": false, 00:06:47.985 "zcopy": false, 00:06:47.985 "c2h_success": true, 00:06:47.985 "sock_priority": 0, 00:06:47.985 "abort_timeout_sec": 1, 00:06:47.985 "ack_timeout": 0, 00:06:47.985 "data_wr_pool_size": 0 00:06:47.985 } 00:06:47.985 } 00:06:47.985 ] 00:06:47.985 }, 00:06:47.985 { 00:06:47.985 "subsystem": "iscsi", 00:06:47.985 "config": [ 00:06:47.985 { 00:06:47.985 "method": "iscsi_set_options", 00:06:47.985 "params": { 00:06:47.985 "node_base": "iqn.2016-06.io.spdk", 00:06:47.985 "max_sessions": 128, 00:06:47.985 "max_connections_per_session": 2, 00:06:47.985 "max_queue_depth": 64, 00:06:47.985 "default_time2wait": 2, 00:06:47.985 "default_time2retain": 20, 00:06:47.985 "first_burst_length": 8192, 00:06:47.985 "immediate_data": true, 00:06:47.985 "allow_duplicated_isid": false, 00:06:47.985 "error_recovery_level": 0, 00:06:47.985 "nop_timeout": 60, 00:06:47.985 "nop_in_interval": 30, 00:06:47.985 "disable_chap": false, 00:06:47.985 "require_chap": false, 00:06:47.985 "mutual_chap": false, 00:06:47.985 "chap_group": 0, 00:06:47.985 "max_large_datain_per_connection": 64, 00:06:47.985 "max_r2t_per_connection": 4, 00:06:47.985 "pdu_pool_size": 36864, 00:06:47.985 "immediate_data_pool_size": 16384, 00:06:47.985 "data_out_pool_size": 2048 00:06:47.985 } 00:06:47.985 } 00:06:47.985 ] 00:06:47.985 } 00:06:47.985 ] 00:06:47.985 } 00:06:47.985 14:26:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:47.985 14:26:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3658691 00:06:47.985 14:26:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3658691 ']' 00:06:47.985 14:26:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3658691 00:06:47.985 14:26:55 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:06:47.985 14:26:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.985 14:26:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3658691 00:06:48.244 14:26:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.244 14:26:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.244 14:26:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3658691' 00:06:48.244 killing process with pid 3658691 00:06:48.244 14:26:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3658691 00:06:48.244 14:26:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3658691 00:06:48.244 14:26:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3658766 00:06:48.244 14:26:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:48.244 14:26:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3658766 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3658766 ']' 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3658766 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3658766 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3658766' 00:06:53.514 killing process with pid 3658766 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3658766 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3658766 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:53.514 00:06:53.514 real 0m5.945s 00:06:53.514 user 0m5.719s 00:06:53.514 sys 0m0.456s 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:53.514 ************************************ 00:06:53.514 END TEST skip_rpc_with_json 00:06:53.514 ************************************ 00:06:53.514 14:27:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:53.514 14:27:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.514 14:27:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.514 14:27:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.514 ************************************ 00:06:53.514 START TEST skip_rpc_with_delay 00:06:53.514 ************************************ 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:53.514 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:53.515 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.515 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:53.515 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.515 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:53.515 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.515 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:53.515 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:53.515 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:53.773 [2024-11-20 14:27:00.579167] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:53.773 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:53.773 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:53.773 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:53.773 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:53.773 00:06:53.773 real 0m0.057s 00:06:53.773 user 0m0.041s 00:06:53.773 sys 0m0.015s 00:06:53.773 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.773 14:27:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:53.773 ************************************ 00:06:53.773 END TEST skip_rpc_with_delay 00:06:53.773 ************************************ 00:06:53.773 14:27:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:53.773 14:27:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:53.773 14:27:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:53.773 14:27:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.773 14:27:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.773 14:27:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.773 ************************************ 00:06:53.773 START TEST exit_on_failed_rpc_init 00:06:53.773 ************************************ 00:06:53.773 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:53.773 14:27:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3660142 00:06:53.773 14:27:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3660142 00:06:53.773 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3660142 ']' 00:06:53.773 14:27:00 skip_rpc.exit_on_failed_rpc_init 
-- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.773 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.773 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.773 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.773 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.773 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:53.773 [2024-11-20 14:27:00.684406] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:06:53.773 [2024-11-20 14:27:00.684457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660142 ] 00:06:53.773 [2024-11-20 14:27:00.751661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.773 [2024-11-20 14:27:00.785252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.032 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.032 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:54.032 14:27:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.032 14:27:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:54.032 14:27:00 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:54.032 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:54.032 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:54.032 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.032 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:54.032 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.032 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:54.032 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.032 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:54.032 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:54.032 14:27:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:54.032 [2024-11-20 14:27:00.995326] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:06:54.032 [2024-11-20 14:27:00.995381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660174 ] 00:06:54.032 [2024-11-20 14:27:01.073011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.290 [2024-11-20 14:27:01.108944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.290 [2024-11-20 14:27:01.108994] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:54.290 [2024-11-20 14:27:01.109005] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:54.290 [2024-11-20 14:27:01.109012] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.290 14:27:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:54.290 14:27:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:54.290 14:27:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:54.290 14:27:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:54.290 14:27:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:54.290 14:27:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:54.290 14:27:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:54.290 14:27:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3660142 00:06:54.290 14:27:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3660142 ']' 00:06:54.290 14:27:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3660142 00:06:54.290 14:27:01 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:54.290 14:27:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.290 14:27:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3660142 00:06:54.290 14:27:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.290 14:27:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.290 14:27:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3660142' 00:06:54.290 killing process with pid 3660142 00:06:54.290 14:27:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3660142 00:06:54.290 14:27:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3660142 00:06:54.548 00:06:54.548 real 0m0.733s 00:06:54.548 user 0m0.809s 00:06:54.548 sys 0m0.307s 00:06:54.548 14:27:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.548 14:27:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:54.548 ************************************ 00:06:54.548 END TEST exit_on_failed_rpc_init 00:06:54.548 ************************************ 00:06:54.548 14:27:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:54.548 00:06:54.548 real 0m12.308s 00:06:54.548 user 0m11.708s 00:06:54.548 sys 0m1.260s 00:06:54.548 14:27:01 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.548 14:27:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.548 ************************************ 00:06:54.548 END TEST skip_rpc 00:06:54.548 ************************************ 00:06:54.548 14:27:01 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:54.548 14:27:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.548 14:27:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.548 14:27:01 -- common/autotest_common.sh@10 -- # set +x 00:06:54.548 ************************************ 00:06:54.548 START TEST rpc_client 00:06:54.548 ************************************ 00:06:54.548 14:27:01 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:54.548 * Looking for test storage... 00:06:54.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:54.548 14:27:01 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.548 14:27:01 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.548 14:27:01 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:54.548 14:27:01 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.548 14:27:01 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:54.548 14:27:01 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.548 14:27:01 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:54.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.548 --rc genhtml_branch_coverage=1 00:06:54.548 --rc genhtml_function_coverage=1 00:06:54.548 --rc genhtml_legend=1 00:06:54.548 --rc geninfo_all_blocks=1 00:06:54.548 --rc geninfo_unexecuted_blocks=1 00:06:54.548 00:06:54.548 ' 00:06:54.548 14:27:01 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:54.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.548 --rc genhtml_branch_coverage=1 
00:06:54.548 --rc genhtml_function_coverage=1 00:06:54.548 --rc genhtml_legend=1 00:06:54.548 --rc geninfo_all_blocks=1 00:06:54.548 --rc geninfo_unexecuted_blocks=1 00:06:54.548 00:06:54.548 ' 00:06:54.548 14:27:01 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:54.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.549 --rc genhtml_branch_coverage=1 00:06:54.549 --rc genhtml_function_coverage=1 00:06:54.549 --rc genhtml_legend=1 00:06:54.549 --rc geninfo_all_blocks=1 00:06:54.549 --rc geninfo_unexecuted_blocks=1 00:06:54.549 00:06:54.549 ' 00:06:54.549 14:27:01 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:54.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.549 --rc genhtml_branch_coverage=1 00:06:54.549 --rc genhtml_function_coverage=1 00:06:54.549 --rc genhtml_legend=1 00:06:54.549 --rc geninfo_all_blocks=1 00:06:54.549 --rc geninfo_unexecuted_blocks=1 00:06:54.549 00:06:54.549 ' 00:06:54.549 14:27:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:54.549 OK 00:06:54.549 14:27:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:54.549 00:06:54.549 real 0m0.137s 00:06:54.549 user 0m0.091s 00:06:54.549 sys 0m0.052s 00:06:54.549 14:27:01 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.549 14:27:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:54.549 ************************************ 00:06:54.549 END TEST rpc_client 00:06:54.549 ************************************ 00:06:54.809 14:27:01 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:54.809 14:27:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.809 14:27:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.809 14:27:01 -- common/autotest_common.sh@10 
-- # set +x 00:06:54.809 ************************************ 00:06:54.809 START TEST json_config 00:06:54.809 ************************************ 00:06:54.809 14:27:01 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:54.809 14:27:01 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.809 14:27:01 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.809 14:27:01 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:54.809 14:27:01 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:54.809 14:27:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.809 14:27:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.809 14:27:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.809 14:27:01 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.809 14:27:01 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.809 14:27:01 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.809 14:27:01 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.809 14:27:01 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.809 14:27:01 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.809 14:27:01 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.809 14:27:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.809 14:27:01 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:54.809 14:27:01 json_config -- scripts/common.sh@345 -- # : 1 00:06:54.809 14:27:01 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.809 14:27:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.809 14:27:01 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:54.809 14:27:01 json_config -- scripts/common.sh@353 -- # local d=1 00:06:54.809 14:27:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.809 14:27:01 json_config -- scripts/common.sh@355 -- # echo 1 00:06:54.809 14:27:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.809 14:27:01 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:54.809 14:27:01 json_config -- scripts/common.sh@353 -- # local d=2 00:06:54.809 14:27:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.809 14:27:01 json_config -- scripts/common.sh@355 -- # echo 2 00:06:54.809 14:27:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.809 14:27:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.809 14:27:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.809 14:27:01 json_config -- scripts/common.sh@368 -- # return 0 00:06:54.809 14:27:01 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.809 14:27:01 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:54.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.809 --rc genhtml_branch_coverage=1 00:06:54.809 --rc genhtml_function_coverage=1 00:06:54.809 --rc genhtml_legend=1 00:06:54.809 --rc geninfo_all_blocks=1 00:06:54.809 --rc geninfo_unexecuted_blocks=1 00:06:54.809 00:06:54.809 ' 00:06:54.809 14:27:01 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:54.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.809 --rc genhtml_branch_coverage=1 00:06:54.809 --rc genhtml_function_coverage=1 00:06:54.809 --rc genhtml_legend=1 00:06:54.809 --rc geninfo_all_blocks=1 00:06:54.809 --rc geninfo_unexecuted_blocks=1 00:06:54.809 00:06:54.809 ' 00:06:54.809 14:27:01 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:54.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.809 --rc genhtml_branch_coverage=1 00:06:54.809 --rc genhtml_function_coverage=1 00:06:54.809 --rc genhtml_legend=1 00:06:54.809 --rc geninfo_all_blocks=1 00:06:54.809 --rc geninfo_unexecuted_blocks=1 00:06:54.809 00:06:54.809 ' 00:06:54.809 14:27:01 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:54.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.809 --rc genhtml_branch_coverage=1 00:06:54.809 --rc genhtml_function_coverage=1 00:06:54.809 --rc genhtml_legend=1 00:06:54.809 --rc geninfo_all_blocks=1 00:06:54.809 --rc geninfo_unexecuted_blocks=1 00:06:54.809 00:06:54.809 ' 00:06:54.809 14:27:01 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:54.809 14:27:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:54.809 14:27:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.809 14:27:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.809 14:27:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.809 14:27:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.809 14:27:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.809 14:27:01 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.809 14:27:01 json_config -- paths/export.sh@5 -- # export PATH 00:06:54.809 14:27:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@51 -- # : 0 00:06:54.809 14:27:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:54.810 14:27:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:54.810 14:27:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:54.810 14:27:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.810 14:27:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.810 14:27:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:54.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:54.810 14:27:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:54.810 14:27:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:54.810 14:27:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:54.810 INFO: JSON configuration test init 00:06:54.810 14:27:01 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:54.810 14:27:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.810 14:27:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:54.810 14:27:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.810 14:27:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.810 14:27:01 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:54.810 14:27:01 json_config -- json_config/common.sh@9 -- # local app=target 00:06:54.810 14:27:01 json_config -- json_config/common.sh@10 -- # shift 00:06:54.810 14:27:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:54.810 14:27:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:54.810 14:27:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:54.810 14:27:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:54.810 14:27:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:54.810 14:27:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3660647 00:06:54.810 14:27:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:54.810 Waiting for target to run... 
00:06:54.810 14:27:01 json_config -- json_config/common.sh@25 -- # waitforlisten 3660647 /var/tmp/spdk_tgt.sock 00:06:54.810 14:27:01 json_config -- common/autotest_common.sh@835 -- # '[' -z 3660647 ']' 00:06:54.810 14:27:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:54.810 14:27:01 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:54.810 14:27:01 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.810 14:27:01 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:54.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:54.810 14:27:01 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.810 14:27:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.810 [2024-11-20 14:27:01.835587] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:06:54.810 [2024-11-20 14:27:01.835655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660647 ] 00:06:55.377 [2024-11-20 14:27:02.248186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.377 [2024-11-20 14:27:02.279590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.636 14:27:02 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.636 14:27:02 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:55.636 14:27:02 json_config -- json_config/common.sh@26 -- # echo '' 00:06:55.636 00:06:55.636 14:27:02 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:55.636 14:27:02 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:55.636 14:27:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.636 14:27:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.636 14:27:02 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:55.636 14:27:02 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:55.636 14:27:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:55.636 14:27:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.636 14:27:02 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:55.636 14:27:02 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:55.636 14:27:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:56.203 14:27:03 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:06:56.203 14:27:03 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:56.203 14:27:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:56.203 14:27:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:56.203 14:27:03 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:56.203 14:27:03 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:56.203 14:27:03 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:56.203 14:27:03 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:56.203 14:27:03 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:56.204 14:27:03 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:56.204 14:27:03 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:56.204 14:27:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@54 -- # sort 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:56.462 14:27:03 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:56.462 14:27:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:56.462 14:27:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:56.462 14:27:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:56.462 14:27:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:56.462 14:27:03 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:56.462 14:27:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:56.721 MallocForNvmf0 00:06:56.721 14:27:03 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:06:56.721 14:27:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:56.721 MallocForNvmf1 00:06:56.721 14:27:03 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:56.721 14:27:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:56.980 [2024-11-20 14:27:03.847533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.980 14:27:03 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:56.980 14:27:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:56.980 14:27:04 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:56.980 14:27:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:57.238 14:27:04 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:57.238 14:27:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:57.497 14:27:04 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:57.497 14:27:04 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:57.497 [2024-11-20 14:27:04.477478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:57.497 14:27:04 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:57.497 14:27:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:57.497 14:27:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:57.497 14:27:04 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:57.497 14:27:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:57.497 14:27:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:57.497 14:27:04 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:57.497 14:27:04 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:57.497 14:27:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:57.755 MallocBdevForConfigChangeCheck 00:06:57.755 14:27:04 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:57.755 14:27:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:57.755 14:27:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:57.755 14:27:04 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:57.755 14:27:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:58.013 14:27:05 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:06:58.013 INFO: shutting down applications... 00:06:58.013 14:27:05 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:58.013 14:27:05 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:58.013 14:27:05 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:58.013 14:27:05 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:58.582 Calling clear_iscsi_subsystem 00:06:58.582 Calling clear_nvmf_subsystem 00:06:58.582 Calling clear_nbd_subsystem 00:06:58.582 Calling clear_ublk_subsystem 00:06:58.582 Calling clear_vhost_blk_subsystem 00:06:58.582 Calling clear_vhost_scsi_subsystem 00:06:58.582 Calling clear_bdev_subsystem 00:06:58.582 14:27:05 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:58.582 14:27:05 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:58.582 14:27:05 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:58.582 14:27:05 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:58.582 14:27:05 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:58.582 14:27:05 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:58.841 14:27:05 json_config -- json_config/json_config.sh@352 -- # break 00:06:58.841 14:27:05 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:58.841 14:27:05 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:06:58.841 14:27:05 json_config -- json_config/common.sh@31 -- # local app=target 00:06:58.841 14:27:05 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:58.841 14:27:05 json_config -- json_config/common.sh@35 -- # [[ -n 3660647 ]] 00:06:58.841 14:27:05 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3660647 00:06:58.841 14:27:05 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:58.841 14:27:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:58.841 14:27:05 json_config -- json_config/common.sh@41 -- # kill -0 3660647 00:06:58.841 14:27:05 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:59.410 14:27:06 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:59.410 14:27:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:59.410 14:27:06 json_config -- json_config/common.sh@41 -- # kill -0 3660647 00:06:59.410 14:27:06 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:59.410 14:27:06 json_config -- json_config/common.sh@43 -- # break 00:06:59.410 14:27:06 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:59.410 14:27:06 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:59.410 SPDK target shutdown done 00:06:59.410 14:27:06 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:59.410 INFO: relaunching applications... 
00:06:59.410 14:27:06 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:59.410 14:27:06 json_config -- json_config/common.sh@9 -- # local app=target 00:06:59.410 14:27:06 json_config -- json_config/common.sh@10 -- # shift 00:06:59.410 14:27:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:59.410 14:27:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:59.410 14:27:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:59.410 14:27:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:59.410 14:27:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:59.410 14:27:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3661738 00:06:59.410 14:27:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:59.410 Waiting for target to run... 00:06:59.410 14:27:06 json_config -- json_config/common.sh@25 -- # waitforlisten 3661738 /var/tmp/spdk_tgt.sock 00:06:59.410 14:27:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:59.410 14:27:06 json_config -- common/autotest_common.sh@835 -- # '[' -z 3661738 ']' 00:06:59.410 14:27:06 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:59.410 14:27:06 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.410 14:27:06 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:59.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:59.410 14:27:06 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.410 14:27:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:59.410 [2024-11-20 14:27:06.261103] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:06:59.410 [2024-11-20 14:27:06.261162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3661738 ] 00:06:59.669 [2024-11-20 14:27:06.530187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.669 [2024-11-20 14:27:06.554289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.237 [2024-11-20 14:27:07.057641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.238 [2024-11-20 14:27:07.090008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:00.238 14:27:07 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.238 14:27:07 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:00.238 14:27:07 json_config -- json_config/common.sh@26 -- # echo '' 00:07:00.238 00:07:00.238 14:27:07 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:00.238 14:27:07 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:00.238 INFO: Checking if target configuration is the same... 
00:07:00.238 14:27:07 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:00.238 14:27:07 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:00.238 14:27:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:00.238 + '[' 2 -ne 2 ']' 00:07:00.238 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:00.238 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:00.238 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:00.238 +++ basename /dev/fd/62 00:07:00.238 ++ mktemp /tmp/62.XXX 00:07:00.238 + tmp_file_1=/tmp/62.roq 00:07:00.238 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:00.238 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:00.238 + tmp_file_2=/tmp/spdk_tgt_config.json.lUi 00:07:00.238 + ret=0 00:07:00.238 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:00.496 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:00.496 + diff -u /tmp/62.roq /tmp/spdk_tgt_config.json.lUi 00:07:00.496 + echo 'INFO: JSON config files are the same' 00:07:00.496 INFO: JSON config files are the same 00:07:00.496 + rm /tmp/62.roq /tmp/spdk_tgt_config.json.lUi 00:07:00.496 + exit 0 00:07:00.496 14:27:07 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:00.496 14:27:07 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:00.496 INFO: changing configuration and checking if this can be detected... 
00:07:00.496 14:27:07 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:00.496 14:27:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:00.755 14:27:07 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:00.755 14:27:07 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:00.755 14:27:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:00.755 + '[' 2 -ne 2 ']' 00:07:00.755 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:00.755 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:07:00.755 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:00.755 +++ basename /dev/fd/62 00:07:00.755 ++ mktemp /tmp/62.XXX 00:07:00.755 + tmp_file_1=/tmp/62.32v 00:07:00.755 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:00.755 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:00.755 + tmp_file_2=/tmp/spdk_tgt_config.json.JVW 00:07:00.755 + ret=0 00:07:00.755 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:01.015 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:01.015 + diff -u /tmp/62.32v /tmp/spdk_tgt_config.json.JVW 00:07:01.015 + ret=1 00:07:01.015 + echo '=== Start of file: /tmp/62.32v ===' 00:07:01.015 + cat /tmp/62.32v 00:07:01.015 + echo '=== End of file: /tmp/62.32v ===' 00:07:01.015 + echo '' 00:07:01.015 + echo '=== Start of file: /tmp/spdk_tgt_config.json.JVW ===' 00:07:01.015 + cat /tmp/spdk_tgt_config.json.JVW 00:07:01.015 + echo '=== End of file: /tmp/spdk_tgt_config.json.JVW ===' 00:07:01.015 + echo '' 00:07:01.015 + rm /tmp/62.32v /tmp/spdk_tgt_config.json.JVW 00:07:01.015 + exit 1 00:07:01.015 14:27:07 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:07:01.015 INFO: configuration change detected. 
00:07:01.015 14:27:07 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:01.015 14:27:07 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:01.015 14:27:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:01.015 14:27:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:01.015 14:27:07 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:01.015 14:27:07 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:01.015 14:27:07 json_config -- json_config/json_config.sh@324 -- # [[ -n 3661738 ]] 00:07:01.015 14:27:07 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:01.015 14:27:07 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:01.015 14:27:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:01.015 14:27:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:01.015 14:27:07 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:01.015 14:27:07 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:01.015 14:27:07 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:01.015 14:27:07 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:01.015 14:27:07 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:01.015 14:27:07 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:01.015 14:27:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:01.015 14:27:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:01.015 14:27:07 json_config -- json_config/json_config.sh@330 -- # killprocess 3661738 00:07:01.015 14:27:07 json_config -- common/autotest_common.sh@954 -- # '[' -z 3661738 ']' 00:07:01.015 14:27:07 json_config -- common/autotest_common.sh@958 -- # kill -0 
3661738 00:07:01.015 14:27:07 json_config -- common/autotest_common.sh@959 -- # uname 00:07:01.015 14:27:08 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.015 14:27:08 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3661738 00:07:01.015 14:27:08 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.015 14:27:08 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.015 14:27:08 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3661738' 00:07:01.015 killing process with pid 3661738 00:07:01.015 14:27:08 json_config -- common/autotest_common.sh@973 -- # kill 3661738 00:07:01.015 14:27:08 json_config -- common/autotest_common.sh@978 -- # wait 3661738 00:07:01.274 14:27:08 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:01.274 14:27:08 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:01.274 14:27:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:01.274 14:27:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:01.274 14:27:08 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:01.274 14:27:08 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:01.274 INFO: Success 00:07:01.274 00:07:01.274 real 0m6.664s 00:07:01.274 user 0m7.765s 00:07:01.274 sys 0m1.693s 00:07:01.274 14:27:08 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.274 14:27:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:01.274 ************************************ 00:07:01.274 END TEST json_config 00:07:01.274 ************************************ 00:07:01.275 14:27:08 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:01.275 14:27:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.275 14:27:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.275 14:27:08 -- common/autotest_common.sh@10 -- # set +x 00:07:01.534 ************************************ 00:07:01.534 START TEST json_config_extra_key 00:07:01.534 ************************************ 00:07:01.534 14:27:08 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:01.534 14:27:08 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.534 14:27:08 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.534 14:27:08 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.534 14:27:08 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:01.534 14:27:08 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.534 14:27:08 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.534 --rc genhtml_branch_coverage=1 00:07:01.534 --rc genhtml_function_coverage=1 00:07:01.534 --rc genhtml_legend=1 00:07:01.534 --rc geninfo_all_blocks=1 
00:07:01.534 --rc geninfo_unexecuted_blocks=1 00:07:01.534 00:07:01.534 ' 00:07:01.534 14:27:08 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.534 --rc genhtml_branch_coverage=1 00:07:01.534 --rc genhtml_function_coverage=1 00:07:01.534 --rc genhtml_legend=1 00:07:01.534 --rc geninfo_all_blocks=1 00:07:01.534 --rc geninfo_unexecuted_blocks=1 00:07:01.534 00:07:01.534 ' 00:07:01.534 14:27:08 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.534 --rc genhtml_branch_coverage=1 00:07:01.534 --rc genhtml_function_coverage=1 00:07:01.534 --rc genhtml_legend=1 00:07:01.534 --rc geninfo_all_blocks=1 00:07:01.534 --rc geninfo_unexecuted_blocks=1 00:07:01.534 00:07:01.534 ' 00:07:01.534 14:27:08 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.534 --rc genhtml_branch_coverage=1 00:07:01.534 --rc genhtml_function_coverage=1 00:07:01.534 --rc genhtml_legend=1 00:07:01.534 --rc geninfo_all_blocks=1 00:07:01.534 --rc geninfo_unexecuted_blocks=1 00:07:01.534 00:07:01.534 ' 00:07:01.534 14:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.534 14:27:08 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.534 14:27:08 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.535 14:27:08 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.535 14:27:08 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.535 14:27:08 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.535 14:27:08 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.535 14:27:08 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.535 14:27:08 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.535 14:27:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:01.535 14:27:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.535 14:27:08 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:01.535 14:27:08 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:01.535 14:27:08 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:01.535 14:27:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.535 14:27:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.535 14:27:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.535 14:27:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:01.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:01.535 14:27:08 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:01.535 14:27:08 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:01.535 14:27:08 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:01.535 14:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:01.535 14:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:01.535 14:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:01.535 14:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:01.535 14:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:01.535 14:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:01.535 14:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:01.535 14:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:01.535 14:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:01.535 14:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:01.535 14:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:01.535 INFO: launching applications... 00:07:01.535 14:27:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:01.535 14:27:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:01.535 14:27:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:01.535 14:27:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:01.535 14:27:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:01.535 14:27:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:01.535 14:27:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:01.535 14:27:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:01.535 14:27:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3662701 00:07:01.535 14:27:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:01.535 Waiting for target to run... 
00:07:01.535 14:27:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3662701 /var/tmp/spdk_tgt.sock 00:07:01.535 14:27:08 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3662701 ']' 00:07:01.535 14:27:08 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:01.535 14:27:08 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.535 14:27:08 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:01.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:01.535 14:27:08 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.535 14:27:08 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:01.535 14:27:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:01.535 [2024-11-20 14:27:08.528534] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:07:01.535 [2024-11-20 14:27:08.528611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662701 ] 00:07:01.793 [2024-11-20 14:27:08.795601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.793 [2024-11-20 14:27:08.818522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.366 14:27:09 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.366 14:27:09 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:02.366 14:27:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:02.366 00:07:02.366 14:27:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:02.366 INFO: shutting down applications... 00:07:02.366 14:27:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:02.366 14:27:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:02.366 14:27:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:02.366 14:27:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3662701 ]] 00:07:02.366 14:27:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3662701 00:07:02.366 14:27:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:02.366 14:27:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:02.366 14:27:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3662701 00:07:02.366 14:27:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:02.939 14:27:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:02.939 14:27:09 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:07:02.939 14:27:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3662701 00:07:02.939 14:27:09 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:02.939 14:27:09 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:02.939 14:27:09 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:02.939 14:27:09 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:02.939 SPDK target shutdown done 00:07:02.939 14:27:09 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:02.939 Success 00:07:02.939 00:07:02.939 real 0m1.454s 00:07:02.939 user 0m1.096s 00:07:02.939 sys 0m0.343s 00:07:02.939 14:27:09 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.939 14:27:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:02.939 ************************************ 00:07:02.939 END TEST json_config_extra_key 00:07:02.939 ************************************ 00:07:02.939 14:27:09 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:02.939 14:27:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.939 14:27:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.939 14:27:09 -- common/autotest_common.sh@10 -- # set +x 00:07:02.939 ************************************ 00:07:02.939 START TEST alias_rpc 00:07:02.939 ************************************ 00:07:02.939 14:27:09 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:02.939 * Looking for test storage... 
00:07:02.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:07:02.939 14:27:09 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:02.939 14:27:09 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:02.939 14:27:09 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.939 14:27:09 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.939 14:27:09 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:02.939 14:27:09 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.939 14:27:09 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:02.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.939 --rc genhtml_branch_coverage=1 00:07:02.939 --rc genhtml_function_coverage=1 00:07:02.939 --rc genhtml_legend=1 00:07:02.939 --rc geninfo_all_blocks=1 00:07:02.939 --rc geninfo_unexecuted_blocks=1 00:07:02.939 00:07:02.939 ' 00:07:02.939 14:27:09 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:02.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.939 --rc genhtml_branch_coverage=1 00:07:02.939 --rc genhtml_function_coverage=1 00:07:02.939 --rc genhtml_legend=1 00:07:02.939 --rc geninfo_all_blocks=1 00:07:02.939 --rc geninfo_unexecuted_blocks=1 00:07:02.939 00:07:02.939 ' 00:07:02.939 14:27:09 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:07:02.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.939 --rc genhtml_branch_coverage=1 00:07:02.939 --rc genhtml_function_coverage=1 00:07:02.939 --rc genhtml_legend=1 00:07:02.939 --rc geninfo_all_blocks=1 00:07:02.939 --rc geninfo_unexecuted_blocks=1 00:07:02.939 00:07:02.939 ' 00:07:02.939 14:27:09 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:02.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.939 --rc genhtml_branch_coverage=1 00:07:02.939 --rc genhtml_function_coverage=1 00:07:02.939 --rc genhtml_legend=1 00:07:02.939 --rc geninfo_all_blocks=1 00:07:02.939 --rc geninfo_unexecuted_blocks=1 00:07:02.939 00:07:02.939 ' 00:07:02.939 14:27:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:02.939 14:27:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3663092 00:07:02.939 14:27:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3663092 00:07:02.939 14:27:09 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3663092 ']' 00:07:02.939 14:27:09 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.939 14:27:09 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.939 14:27:09 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.939 14:27:09 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.939 14:27:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.939 14:27:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.201 [2024-11-20 14:27:10.023733] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:07:03.201 [2024-11-20 14:27:10.023800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663092 ] 00:07:03.201 [2024-11-20 14:27:10.094767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.201 [2024-11-20 14:27:10.132837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.770 14:27:10 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.770 14:27:10 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:03.770 14:27:10 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:04.029 14:27:10 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3663092 00:07:04.029 14:27:10 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3663092 ']' 00:07:04.029 14:27:10 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3663092 00:07:04.029 14:27:10 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:04.029 14:27:10 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.029 14:27:10 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3663092 00:07:04.029 14:27:11 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.029 14:27:11 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.029 14:27:11 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3663092' 00:07:04.029 killing process with pid 3663092 00:07:04.029 14:27:11 alias_rpc -- common/autotest_common.sh@973 -- # kill 3663092 00:07:04.029 14:27:11 alias_rpc -- common/autotest_common.sh@978 -- # wait 3663092 00:07:04.288 00:07:04.288 real 0m1.359s 00:07:04.288 user 0m1.501s 00:07:04.288 sys 0m0.352s 00:07:04.288 14:27:11 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.288 14:27:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.288 ************************************ 00:07:04.288 END TEST alias_rpc 00:07:04.288 ************************************ 00:07:04.288 14:27:11 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:04.288 14:27:11 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:04.288 14:27:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.288 14:27:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.288 14:27:11 -- common/autotest_common.sh@10 -- # set +x 00:07:04.288 ************************************ 00:07:04.288 START TEST spdkcli_tcp 00:07:04.288 ************************************ 00:07:04.288 14:27:11 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:04.288 * Looking for test storage... 
00:07:04.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:04.288 14:27:11 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:04.288 14:27:11 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:04.288 14:27:11 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:04.547 14:27:11 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:04.547 14:27:11 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.547 14:27:11 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.547 14:27:11 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.547 14:27:11 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.547 14:27:11 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.547 14:27:11 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.547 14:27:11 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.547 14:27:11 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.547 14:27:11 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.547 14:27:11 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.547 14:27:11 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.547 14:27:11 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:04.547 14:27:11 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:04.547 14:27:11 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.548 14:27:11 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.548 14:27:11 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:04.548 14:27:11 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:04.548 14:27:11 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.548 14:27:11 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:04.548 14:27:11 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.548 14:27:11 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:04.548 14:27:11 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:04.548 14:27:11 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.548 14:27:11 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:04.548 14:27:11 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.548 14:27:11 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.548 14:27:11 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.548 14:27:11 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:04.548 14:27:11 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.548 14:27:11 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:04.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.548 --rc genhtml_branch_coverage=1 00:07:04.548 --rc genhtml_function_coverage=1 00:07:04.548 --rc genhtml_legend=1 00:07:04.548 --rc geninfo_all_blocks=1 00:07:04.548 --rc geninfo_unexecuted_blocks=1 00:07:04.548 00:07:04.548 ' 00:07:04.548 14:27:11 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:04.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.548 --rc genhtml_branch_coverage=1 00:07:04.548 --rc genhtml_function_coverage=1 00:07:04.548 --rc genhtml_legend=1 00:07:04.548 --rc geninfo_all_blocks=1 00:07:04.548 --rc geninfo_unexecuted_blocks=1 00:07:04.548 00:07:04.548 ' 00:07:04.548 14:27:11 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:04.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.548 --rc genhtml_branch_coverage=1 00:07:04.548 --rc genhtml_function_coverage=1 00:07:04.548 --rc genhtml_legend=1 00:07:04.548 --rc geninfo_all_blocks=1 00:07:04.548 --rc geninfo_unexecuted_blocks=1 00:07:04.548 00:07:04.548 ' 00:07:04.548 14:27:11 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:04.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.548 --rc genhtml_branch_coverage=1 00:07:04.548 --rc genhtml_function_coverage=1 00:07:04.548 --rc genhtml_legend=1 00:07:04.548 --rc geninfo_all_blocks=1 00:07:04.548 --rc geninfo_unexecuted_blocks=1 00:07:04.548 00:07:04.548 ' 00:07:04.548 14:27:11 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:04.548 14:27:11 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:04.548 14:27:11 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:04.548 14:27:11 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:04.548 14:27:11 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:04.548 14:27:11 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:04.548 14:27:11 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:04.548 14:27:11 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:04.548 14:27:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:04.548 14:27:11 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3663488 00:07:04.548 14:27:11 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3663488 00:07:04.548 14:27:11 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3663488 ']' 00:07:04.548 
14:27:11 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:04.548 14:27:11 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.548 14:27:11 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.548 14:27:11 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.548 14:27:11 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.548 14:27:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:04.548 [2024-11-20 14:27:11.445108] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:07:04.548 [2024-11-20 14:27:11.445174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663488 ] 00:07:04.548 [2024-11-20 14:27:11.511328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.548 [2024-11-20 14:27:11.542583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.548 [2024-11-20 14:27:11.542585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.808 14:27:11 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.808 14:27:11 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:04.808 14:27:11 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3663497 00:07:04.808 14:27:11 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:04.808 14:27:11 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 
UNIX-CONNECT:/var/tmp/spdk.sock 00:07:04.808 [ 00:07:04.808 "bdev_malloc_delete", 00:07:04.808 "bdev_malloc_create", 00:07:04.808 "bdev_null_resize", 00:07:04.808 "bdev_null_delete", 00:07:04.808 "bdev_null_create", 00:07:04.808 "bdev_nvme_cuse_unregister", 00:07:04.808 "bdev_nvme_cuse_register", 00:07:04.808 "bdev_opal_new_user", 00:07:04.808 "bdev_opal_set_lock_state", 00:07:04.808 "bdev_opal_delete", 00:07:04.808 "bdev_opal_get_info", 00:07:04.808 "bdev_opal_create", 00:07:04.808 "bdev_nvme_opal_revert", 00:07:04.808 "bdev_nvme_opal_init", 00:07:04.808 "bdev_nvme_send_cmd", 00:07:04.808 "bdev_nvme_set_keys", 00:07:04.808 "bdev_nvme_get_path_iostat", 00:07:04.808 "bdev_nvme_get_mdns_discovery_info", 00:07:04.808 "bdev_nvme_stop_mdns_discovery", 00:07:04.808 "bdev_nvme_start_mdns_discovery", 00:07:04.808 "bdev_nvme_set_multipath_policy", 00:07:04.808 "bdev_nvme_set_preferred_path", 00:07:04.808 "bdev_nvme_get_io_paths", 00:07:04.808 "bdev_nvme_remove_error_injection", 00:07:04.808 "bdev_nvme_add_error_injection", 00:07:04.808 "bdev_nvme_get_discovery_info", 00:07:04.808 "bdev_nvme_stop_discovery", 00:07:04.808 "bdev_nvme_start_discovery", 00:07:04.808 "bdev_nvme_get_controller_health_info", 00:07:04.808 "bdev_nvme_disable_controller", 00:07:04.808 "bdev_nvme_enable_controller", 00:07:04.808 "bdev_nvme_reset_controller", 00:07:04.808 "bdev_nvme_get_transport_statistics", 00:07:04.808 "bdev_nvme_apply_firmware", 00:07:04.808 "bdev_nvme_detach_controller", 00:07:04.808 "bdev_nvme_get_controllers", 00:07:04.808 "bdev_nvme_attach_controller", 00:07:04.808 "bdev_nvme_set_hotplug", 00:07:04.808 "bdev_nvme_set_options", 00:07:04.808 "bdev_passthru_delete", 00:07:04.808 "bdev_passthru_create", 00:07:04.808 "bdev_lvol_set_parent_bdev", 00:07:04.808 "bdev_lvol_set_parent", 00:07:04.808 "bdev_lvol_check_shallow_copy", 00:07:04.808 "bdev_lvol_start_shallow_copy", 00:07:04.808 "bdev_lvol_grow_lvstore", 00:07:04.808 "bdev_lvol_get_lvols", 00:07:04.808 "bdev_lvol_get_lvstores", 
00:07:04.808 "bdev_lvol_delete", 00:07:04.808 "bdev_lvol_set_read_only", 00:07:04.808 "bdev_lvol_resize", 00:07:04.808 "bdev_lvol_decouple_parent", 00:07:04.808 "bdev_lvol_inflate", 00:07:04.808 "bdev_lvol_rename", 00:07:04.808 "bdev_lvol_clone_bdev", 00:07:04.808 "bdev_lvol_clone", 00:07:04.808 "bdev_lvol_snapshot", 00:07:04.808 "bdev_lvol_create", 00:07:04.808 "bdev_lvol_delete_lvstore", 00:07:04.808 "bdev_lvol_rename_lvstore", 00:07:04.808 "bdev_lvol_create_lvstore", 00:07:04.808 "bdev_raid_set_options", 00:07:04.808 "bdev_raid_remove_base_bdev", 00:07:04.808 "bdev_raid_add_base_bdev", 00:07:04.808 "bdev_raid_delete", 00:07:04.808 "bdev_raid_create", 00:07:04.808 "bdev_raid_get_bdevs", 00:07:04.808 "bdev_error_inject_error", 00:07:04.808 "bdev_error_delete", 00:07:04.808 "bdev_error_create", 00:07:04.808 "bdev_split_delete", 00:07:04.808 "bdev_split_create", 00:07:04.808 "bdev_delay_delete", 00:07:04.808 "bdev_delay_create", 00:07:04.808 "bdev_delay_update_latency", 00:07:04.808 "bdev_zone_block_delete", 00:07:04.808 "bdev_zone_block_create", 00:07:04.808 "blobfs_create", 00:07:04.808 "blobfs_detect", 00:07:04.808 "blobfs_set_cache_size", 00:07:04.808 "bdev_aio_delete", 00:07:04.808 "bdev_aio_rescan", 00:07:04.808 "bdev_aio_create", 00:07:04.808 "bdev_ftl_set_property", 00:07:04.808 "bdev_ftl_get_properties", 00:07:04.808 "bdev_ftl_get_stats", 00:07:04.808 "bdev_ftl_unmap", 00:07:04.808 "bdev_ftl_unload", 00:07:04.808 "bdev_ftl_delete", 00:07:04.808 "bdev_ftl_load", 00:07:04.808 "bdev_ftl_create", 00:07:04.808 "bdev_virtio_attach_controller", 00:07:04.808 "bdev_virtio_scsi_get_devices", 00:07:04.808 "bdev_virtio_detach_controller", 00:07:04.808 "bdev_virtio_blk_set_hotplug", 00:07:04.808 "bdev_iscsi_delete", 00:07:04.808 "bdev_iscsi_create", 00:07:04.808 "bdev_iscsi_set_options", 00:07:04.808 "accel_error_inject_error", 00:07:04.808 "ioat_scan_accel_module", 00:07:04.808 "dsa_scan_accel_module", 00:07:04.808 "iaa_scan_accel_module", 00:07:04.808 
"vfu_virtio_create_fs_endpoint", 00:07:04.808 "vfu_virtio_create_scsi_endpoint", 00:07:04.808 "vfu_virtio_scsi_remove_target", 00:07:04.808 "vfu_virtio_scsi_add_target", 00:07:04.808 "vfu_virtio_create_blk_endpoint", 00:07:04.808 "vfu_virtio_delete_endpoint", 00:07:04.808 "keyring_file_remove_key", 00:07:04.808 "keyring_file_add_key", 00:07:04.808 "keyring_linux_set_options", 00:07:04.808 "fsdev_aio_delete", 00:07:04.808 "fsdev_aio_create", 00:07:04.808 "iscsi_get_histogram", 00:07:04.808 "iscsi_enable_histogram", 00:07:04.808 "iscsi_set_options", 00:07:04.808 "iscsi_get_auth_groups", 00:07:04.808 "iscsi_auth_group_remove_secret", 00:07:04.808 "iscsi_auth_group_add_secret", 00:07:04.808 "iscsi_delete_auth_group", 00:07:04.808 "iscsi_create_auth_group", 00:07:04.808 "iscsi_set_discovery_auth", 00:07:04.808 "iscsi_get_options", 00:07:04.808 "iscsi_target_node_request_logout", 00:07:04.808 "iscsi_target_node_set_redirect", 00:07:04.808 "iscsi_target_node_set_auth", 00:07:04.808 "iscsi_target_node_add_lun", 00:07:04.808 "iscsi_get_stats", 00:07:04.808 "iscsi_get_connections", 00:07:04.808 "iscsi_portal_group_set_auth", 00:07:04.808 "iscsi_start_portal_group", 00:07:04.808 "iscsi_delete_portal_group", 00:07:04.808 "iscsi_create_portal_group", 00:07:04.808 "iscsi_get_portal_groups", 00:07:04.808 "iscsi_delete_target_node", 00:07:04.808 "iscsi_target_node_remove_pg_ig_maps", 00:07:04.808 "iscsi_target_node_add_pg_ig_maps", 00:07:04.808 "iscsi_create_target_node", 00:07:04.808 "iscsi_get_target_nodes", 00:07:04.808 "iscsi_delete_initiator_group", 00:07:04.808 "iscsi_initiator_group_remove_initiators", 00:07:04.808 "iscsi_initiator_group_add_initiators", 00:07:04.808 "iscsi_create_initiator_group", 00:07:04.808 "iscsi_get_initiator_groups", 00:07:04.808 "nvmf_set_crdt", 00:07:04.808 "nvmf_set_config", 00:07:04.808 "nvmf_set_max_subsystems", 00:07:04.808 "nvmf_stop_mdns_prr", 00:07:04.808 "nvmf_publish_mdns_prr", 00:07:04.808 "nvmf_subsystem_get_listeners", 00:07:04.808 
"nvmf_subsystem_get_qpairs", 00:07:04.808 "nvmf_subsystem_get_controllers", 00:07:04.808 "nvmf_get_stats", 00:07:04.808 "nvmf_get_transports", 00:07:04.808 "nvmf_create_transport", 00:07:04.808 "nvmf_get_targets", 00:07:04.808 "nvmf_delete_target", 00:07:04.808 "nvmf_create_target", 00:07:04.808 "nvmf_subsystem_allow_any_host", 00:07:04.808 "nvmf_subsystem_set_keys", 00:07:04.808 "nvmf_subsystem_remove_host", 00:07:04.808 "nvmf_subsystem_add_host", 00:07:04.808 "nvmf_ns_remove_host", 00:07:04.808 "nvmf_ns_add_host", 00:07:04.808 "nvmf_subsystem_remove_ns", 00:07:04.808 "nvmf_subsystem_set_ns_ana_group", 00:07:04.808 "nvmf_subsystem_add_ns", 00:07:04.808 "nvmf_subsystem_listener_set_ana_state", 00:07:04.808 "nvmf_discovery_get_referrals", 00:07:04.808 "nvmf_discovery_remove_referral", 00:07:04.808 "nvmf_discovery_add_referral", 00:07:04.808 "nvmf_subsystem_remove_listener", 00:07:04.808 "nvmf_subsystem_add_listener", 00:07:04.808 "nvmf_delete_subsystem", 00:07:04.808 "nvmf_create_subsystem", 00:07:04.808 "nvmf_get_subsystems", 00:07:04.809 "env_dpdk_get_mem_stats", 00:07:04.809 "nbd_get_disks", 00:07:04.809 "nbd_stop_disk", 00:07:04.809 "nbd_start_disk", 00:07:04.809 "ublk_recover_disk", 00:07:04.809 "ublk_get_disks", 00:07:04.809 "ublk_stop_disk", 00:07:04.809 "ublk_start_disk", 00:07:04.809 "ublk_destroy_target", 00:07:04.809 "ublk_create_target", 00:07:04.809 "virtio_blk_create_transport", 00:07:04.809 "virtio_blk_get_transports", 00:07:04.809 "vhost_controller_set_coalescing", 00:07:04.809 "vhost_get_controllers", 00:07:04.809 "vhost_delete_controller", 00:07:04.809 "vhost_create_blk_controller", 00:07:04.809 "vhost_scsi_controller_remove_target", 00:07:04.809 "vhost_scsi_controller_add_target", 00:07:04.809 "vhost_start_scsi_controller", 00:07:04.809 "vhost_create_scsi_controller", 00:07:04.809 "thread_set_cpumask", 00:07:04.809 "scheduler_set_options", 00:07:04.809 "framework_get_governor", 00:07:04.809 "framework_get_scheduler", 00:07:04.809 
"framework_set_scheduler", 00:07:04.809 "framework_get_reactors", 00:07:04.809 "thread_get_io_channels", 00:07:04.809 "thread_get_pollers", 00:07:04.809 "thread_get_stats", 00:07:04.809 "framework_monitor_context_switch", 00:07:04.809 "spdk_kill_instance", 00:07:04.809 "log_enable_timestamps", 00:07:04.809 "log_get_flags", 00:07:04.809 "log_clear_flag", 00:07:04.809 "log_set_flag", 00:07:04.809 "log_get_level", 00:07:04.809 "log_set_level", 00:07:04.809 "log_get_print_level", 00:07:04.809 "log_set_print_level", 00:07:04.809 "framework_enable_cpumask_locks", 00:07:04.809 "framework_disable_cpumask_locks", 00:07:04.809 "framework_wait_init", 00:07:04.809 "framework_start_init", 00:07:04.809 "scsi_get_devices", 00:07:04.809 "bdev_get_histogram", 00:07:04.809 "bdev_enable_histogram", 00:07:04.809 "bdev_set_qos_limit", 00:07:04.809 "bdev_set_qd_sampling_period", 00:07:04.809 "bdev_get_bdevs", 00:07:04.809 "bdev_reset_iostat", 00:07:04.809 "bdev_get_iostat", 00:07:04.809 "bdev_examine", 00:07:04.809 "bdev_wait_for_examine", 00:07:04.809 "bdev_set_options", 00:07:04.809 "accel_get_stats", 00:07:04.809 "accel_set_options", 00:07:04.809 "accel_set_driver", 00:07:04.809 "accel_crypto_key_destroy", 00:07:04.809 "accel_crypto_keys_get", 00:07:04.809 "accel_crypto_key_create", 00:07:04.809 "accel_assign_opc", 00:07:04.809 "accel_get_module_info", 00:07:04.809 "accel_get_opc_assignments", 00:07:04.809 "vmd_rescan", 00:07:04.809 "vmd_remove_device", 00:07:04.809 "vmd_enable", 00:07:04.809 "sock_get_default_impl", 00:07:04.809 "sock_set_default_impl", 00:07:04.809 "sock_impl_set_options", 00:07:04.809 "sock_impl_get_options", 00:07:04.809 "iobuf_get_stats", 00:07:04.809 "iobuf_set_options", 00:07:04.809 "keyring_get_keys", 00:07:04.809 "vfu_tgt_set_base_path", 00:07:04.809 "framework_get_pci_devices", 00:07:04.809 "framework_get_config", 00:07:04.809 "framework_get_subsystems", 00:07:04.809 "fsdev_set_opts", 00:07:04.809 "fsdev_get_opts", 00:07:04.809 "trace_get_info", 
00:07:04.809 "trace_get_tpoint_group_mask", 00:07:04.809 "trace_disable_tpoint_group", 00:07:04.809 "trace_enable_tpoint_group", 00:07:04.809 "trace_clear_tpoint_mask", 00:07:04.809 "trace_set_tpoint_mask", 00:07:04.809 "notify_get_notifications", 00:07:04.809 "notify_get_types", 00:07:04.809 "spdk_get_version", 00:07:04.809 "rpc_get_methods" 00:07:04.809 ] 00:07:05.068 14:27:11 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:05.068 14:27:11 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:05.068 14:27:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.068 14:27:11 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:05.068 14:27:11 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3663488 00:07:05.069 14:27:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3663488 ']' 00:07:05.069 14:27:11 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3663488 00:07:05.069 14:27:11 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:05.069 14:27:11 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.069 14:27:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3663488 00:07:05.069 14:27:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.069 14:27:11 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.069 14:27:11 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3663488' 00:07:05.069 killing process with pid 3663488 00:07:05.069 14:27:11 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3663488 00:07:05.069 14:27:11 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3663488 00:07:05.386 00:07:05.386 real 0m0.867s 00:07:05.386 user 0m1.457s 00:07:05.386 sys 0m0.345s 00:07:05.386 14:27:12 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.386 14:27:12 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:07:05.386 ************************************ 00:07:05.386 END TEST spdkcli_tcp 00:07:05.386 ************************************ 00:07:05.386 14:27:12 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:05.386 14:27:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.386 14:27:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.386 14:27:12 -- common/autotest_common.sh@10 -- # set +x 00:07:05.386 ************************************ 00:07:05.386 START TEST dpdk_mem_utility 00:07:05.386 ************************************ 00:07:05.386 14:27:12 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:05.386 * Looking for test storage... 00:07:05.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:05.386 14:27:12 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.386 14:27:12 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.386 14:27:12 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.386 14:27:12 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.386 14:27:12 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:05.387 14:27:12 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.387 14:27:12 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.387 14:27:12 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.387 14:27:12 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:05.387 14:27:12 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.387 14:27:12 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:07:05.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.387 --rc genhtml_branch_coverage=1 00:07:05.387 --rc genhtml_function_coverage=1 00:07:05.387 --rc genhtml_legend=1 00:07:05.387 --rc geninfo_all_blocks=1 00:07:05.387 --rc geninfo_unexecuted_blocks=1 00:07:05.387 00:07:05.387 ' 00:07:05.387 14:27:12 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:05.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.387 --rc genhtml_branch_coverage=1 00:07:05.387 --rc genhtml_function_coverage=1 00:07:05.387 --rc genhtml_legend=1 00:07:05.387 --rc geninfo_all_blocks=1 00:07:05.387 --rc geninfo_unexecuted_blocks=1 00:07:05.387 00:07:05.387 ' 00:07:05.387 14:27:12 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:05.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.387 --rc genhtml_branch_coverage=1 00:07:05.387 --rc genhtml_function_coverage=1 00:07:05.387 --rc genhtml_legend=1 00:07:05.387 --rc geninfo_all_blocks=1 00:07:05.387 --rc geninfo_unexecuted_blocks=1 00:07:05.387 00:07:05.387 ' 00:07:05.387 14:27:12 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:05.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.387 --rc genhtml_branch_coverage=1 00:07:05.387 --rc genhtml_function_coverage=1 00:07:05.387 --rc genhtml_legend=1 00:07:05.387 --rc geninfo_all_blocks=1 00:07:05.387 --rc geninfo_unexecuted_blocks=1 00:07:05.387 00:07:05.387 ' 00:07:05.387 14:27:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:05.387 14:27:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3663835 00:07:05.387 14:27:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3663835 00:07:05.387 14:27:12 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 
3663835 ']' 00:07:05.387 14:27:12 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.387 14:27:12 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.387 14:27:12 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.387 14:27:12 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.387 14:27:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:05.387 14:27:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:05.387 [2024-11-20 14:27:12.348450] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:07:05.387 [2024-11-20 14:27:12.348529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663835 ] 00:07:05.387 [2024-11-20 14:27:12.418997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.645 [2024-11-20 14:27:12.456944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.214 14:27:13 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.214 14:27:13 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:06.214 14:27:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:06.214 14:27:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:06.214 14:27:13 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:06.214 14:27:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:06.214 { 00:07:06.214 "filename": "/tmp/spdk_mem_dump.txt" 00:07:06.214 } 00:07:06.214 14:27:13 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.214 14:27:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:06.214 DPDK memory size 818.000000 MiB in 1 heap(s) 00:07:06.214 1 heaps totaling size 818.000000 MiB 00:07:06.214 size: 818.000000 MiB heap id: 0 00:07:06.214 end heaps---------- 00:07:06.214 9 mempools totaling size 603.782043 MiB 00:07:06.214 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:06.214 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:06.214 size: 100.555481 MiB name: bdev_io_3663835 00:07:06.214 size: 50.003479 MiB name: msgpool_3663835 00:07:06.214 size: 36.509338 MiB name: fsdev_io_3663835 00:07:06.214 size: 21.763794 MiB name: PDU_Pool 00:07:06.214 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:06.214 size: 4.133484 MiB name: evtpool_3663835 00:07:06.214 size: 0.026123 MiB name: Session_Pool 00:07:06.214 end mempools------- 00:07:06.214 6 memzones totaling size 4.142822 MiB 00:07:06.214 size: 1.000366 MiB name: RG_ring_0_3663835 00:07:06.214 size: 1.000366 MiB name: RG_ring_1_3663835 00:07:06.214 size: 1.000366 MiB name: RG_ring_4_3663835 00:07:06.214 size: 1.000366 MiB name: RG_ring_5_3663835 00:07:06.214 size: 0.125366 MiB name: RG_ring_2_3663835 00:07:06.214 size: 0.015991 MiB name: RG_ring_3_3663835 00:07:06.214 end memzones------- 00:07:06.214 14:27:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:06.214 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:07:06.214 list of free elements. 
size: 10.852478 MiB 00:07:06.214 element at address: 0x200019200000 with size: 0.999878 MiB 00:07:06.214 element at address: 0x200019400000 with size: 0.999878 MiB 00:07:06.214 element at address: 0x200000400000 with size: 0.998535 MiB 00:07:06.214 element at address: 0x200032000000 with size: 0.994446 MiB 00:07:06.214 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:06.214 element at address: 0x200012c00000 with size: 0.944275 MiB 00:07:06.214 element at address: 0x200019600000 with size: 0.936584 MiB 00:07:06.214 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:06.214 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:07:06.214 element at address: 0x200000c00000 with size: 0.495422 MiB 00:07:06.214 element at address: 0x20000a600000 with size: 0.490723 MiB 00:07:06.214 element at address: 0x200019800000 with size: 0.485657 MiB 00:07:06.214 element at address: 0x200003e00000 with size: 0.481934 MiB 00:07:06.214 element at address: 0x200028200000 with size: 0.410034 MiB 00:07:06.214 element at address: 0x200000800000 with size: 0.355042 MiB 00:07:06.214 list of standard malloc elements. 
size: 199.218628 MiB 00:07:06.214 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:06.214 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:06.214 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:06.214 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:07:06.214 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:07:06.214 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:06.214 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:07:06.214 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:06.214 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:07:06.214 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:06.214 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:06.214 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:06.214 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:06.214 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:06.214 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:06.214 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:06.214 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:07:06.214 element at address: 0x20000085b040 with size: 0.000183 MiB 00:07:06.214 element at address: 0x20000085f300 with size: 0.000183 MiB 00:07:06.214 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:06.214 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:06.214 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:06.214 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:06.214 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:06.214 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:06.214 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:06.214 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:06.214 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:06.214 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:06.214 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:06.214 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:06.214 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:06.214 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:06.214 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:07:06.214 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:07:06.214 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:07:06.214 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:07:06.214 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:07:06.214 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:07:06.215 element at address: 0x200028268f80 with size: 0.000183 MiB 00:07:06.215 element at address: 0x200028269040 with size: 0.000183 MiB 00:07:06.215 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:07:06.215 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:07:06.215 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:07:06.215 list of memzone associated elements. 
size: 607.928894 MiB 00:07:06.215 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:07:06.215 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:06.215 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:07:06.215 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:06.215 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:07:06.215 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3663835_0 00:07:06.215 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:06.215 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3663835_0 00:07:06.215 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:06.215 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3663835_0 00:07:06.215 element at address: 0x2000199be940 with size: 20.255554 MiB 00:07:06.215 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:06.215 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:07:06.215 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:06.215 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:06.215 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3663835_0 00:07:06.215 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:06.215 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3663835 00:07:06.215 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:06.215 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3663835 00:07:06.215 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:06.215 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:06.215 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:07:06.215 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:06.215 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:06.215 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:06.215 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:06.215 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:06.215 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:06.215 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3663835 00:07:06.215 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:06.215 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3663835 00:07:06.215 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:07:06.215 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3663835 00:07:06.215 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:07:06.215 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3663835 00:07:06.215 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:06.215 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3663835 00:07:06.215 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:06.215 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3663835 00:07:06.215 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:06.215 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:06.215 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:06.215 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:06.215 element at address: 0x20001987c540 with size: 0.250488 MiB 00:07:06.215 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:06.215 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:06.215 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3663835 00:07:06.215 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:07:06.215 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3663835 00:07:06.215 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:07:06.215 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:06.215 element at address: 0x200028269100 with size: 0.023743 MiB 00:07:06.215 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:06.215 element at address: 0x20000085b100 with size: 0.016113 MiB 00:07:06.215 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3663835 00:07:06.215 element at address: 0x20002826f240 with size: 0.002441 MiB 00:07:06.215 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:06.215 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:06.215 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3663835 00:07:06.215 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:06.215 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3663835 00:07:06.215 element at address: 0x20000085af00 with size: 0.000305 MiB 00:07:06.215 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3663835 00:07:06.215 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:07:06.215 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:06.215 14:27:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:06.215 14:27:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3663835 00:07:06.215 14:27:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3663835 ']' 00:07:06.215 14:27:13 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3663835 00:07:06.215 14:27:13 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:06.215 14:27:13 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.215 14:27:13 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3663835 00:07:06.215 14:27:13 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.215 14:27:13 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.215 14:27:13 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3663835' 00:07:06.215 killing process with pid 3663835 00:07:06.215 14:27:13 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3663835 00:07:06.215 14:27:13 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3663835 00:07:06.473 00:07:06.473 real 0m1.266s 00:07:06.473 user 0m1.353s 00:07:06.473 sys 0m0.340s 00:07:06.473 14:27:13 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.474 14:27:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:06.474 ************************************ 00:07:06.474 END TEST dpdk_mem_utility 00:07:06.474 ************************************ 00:07:06.474 14:27:13 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:06.474 14:27:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.474 14:27:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.474 14:27:13 -- common/autotest_common.sh@10 -- # set +x 00:07:06.474 ************************************ 00:07:06.474 START TEST event 00:07:06.474 ************************************ 00:07:06.474 14:27:13 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:06.733 * Looking for test storage... 
00:07:06.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:06.733 14:27:13 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:06.733 14:27:13 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:06.733 14:27:13 event -- common/autotest_common.sh@1693 -- # lcov --version 00:07:06.733 14:27:13 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:06.733 14:27:13 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.733 14:27:13 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.733 14:27:13 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.733 14:27:13 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.733 14:27:13 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.733 14:27:13 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.733 14:27:13 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.733 14:27:13 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.733 14:27:13 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.733 14:27:13 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.733 14:27:13 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.733 14:27:13 event -- scripts/common.sh@344 -- # case "$op" in 00:07:06.733 14:27:13 event -- scripts/common.sh@345 -- # : 1 00:07:06.733 14:27:13 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.733 14:27:13 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.733 14:27:13 event -- scripts/common.sh@365 -- # decimal 1 00:07:06.733 14:27:13 event -- scripts/common.sh@353 -- # local d=1 00:07:06.733 14:27:13 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.733 14:27:13 event -- scripts/common.sh@355 -- # echo 1 00:07:06.733 14:27:13 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.733 14:27:13 event -- scripts/common.sh@366 -- # decimal 2 00:07:06.733 14:27:13 event -- scripts/common.sh@353 -- # local d=2 00:07:06.733 14:27:13 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.733 14:27:13 event -- scripts/common.sh@355 -- # echo 2 00:07:06.733 14:27:13 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.733 14:27:13 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.733 14:27:13 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.733 14:27:13 event -- scripts/common.sh@368 -- # return 0 00:07:06.733 14:27:13 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.733 14:27:13 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:06.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.733 --rc genhtml_branch_coverage=1 00:07:06.733 --rc genhtml_function_coverage=1 00:07:06.733 --rc genhtml_legend=1 00:07:06.733 --rc geninfo_all_blocks=1 00:07:06.733 --rc geninfo_unexecuted_blocks=1 00:07:06.733 00:07:06.733 ' 00:07:06.733 14:27:13 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:06.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.733 --rc genhtml_branch_coverage=1 00:07:06.733 --rc genhtml_function_coverage=1 00:07:06.733 --rc genhtml_legend=1 00:07:06.733 --rc geninfo_all_blocks=1 00:07:06.733 --rc geninfo_unexecuted_blocks=1 00:07:06.733 00:07:06.733 ' 00:07:06.733 14:27:13 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:06.733 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:06.733 --rc genhtml_branch_coverage=1 00:07:06.733 --rc genhtml_function_coverage=1 00:07:06.733 --rc genhtml_legend=1 00:07:06.733 --rc geninfo_all_blocks=1 00:07:06.733 --rc geninfo_unexecuted_blocks=1 00:07:06.733 00:07:06.733 ' 00:07:06.733 14:27:13 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:06.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.733 --rc genhtml_branch_coverage=1 00:07:06.733 --rc genhtml_function_coverage=1 00:07:06.733 --rc genhtml_legend=1 00:07:06.733 --rc geninfo_all_blocks=1 00:07:06.733 --rc geninfo_unexecuted_blocks=1 00:07:06.733 00:07:06.733 ' 00:07:06.733 14:27:13 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:06.733 14:27:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:06.733 14:27:13 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:06.733 14:27:13 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:06.733 14:27:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.733 14:27:13 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.733 ************************************ 00:07:06.733 START TEST event_perf 00:07:06.733 ************************************ 00:07:06.733 14:27:13 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:06.733 Running I/O for 1 seconds...[2024-11-20 14:27:13.664953] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:07:06.733 [2024-11-20 14:27:13.665008] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664100 ] 00:07:06.733 [2024-11-20 14:27:13.734821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.733 [2024-11-20 14:27:13.777266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.733 [2024-11-20 14:27:13.777369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.733 [2024-11-20 14:27:13.777522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.733 Running I/O for 1 seconds...[2024-11-20 14:27:13.777523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.110 00:07:08.110 lcore 0: 187151 00:07:08.110 lcore 1: 187153 00:07:08.110 lcore 2: 187152 00:07:08.110 lcore 3: 187149 00:07:08.110 done. 
00:07:08.110 00:07:08.110 real 0m1.149s 00:07:08.110 user 0m4.080s 00:07:08.110 sys 0m0.068s 00:07:08.110 14:27:14 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.110 14:27:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:08.110 ************************************ 00:07:08.110 END TEST event_perf 00:07:08.110 ************************************ 00:07:08.110 14:27:14 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:08.110 14:27:14 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:08.110 14:27:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.110 14:27:14 event -- common/autotest_common.sh@10 -- # set +x 00:07:08.110 ************************************ 00:07:08.110 START TEST event_reactor 00:07:08.110 ************************************ 00:07:08.110 14:27:14 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:08.110 [2024-11-20 14:27:14.861618] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:07:08.110 [2024-11-20 14:27:14.861664] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664333 ] 00:07:08.110 [2024-11-20 14:27:14.926035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.110 [2024-11-20 14:27:14.954908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.044 test_start 00:07:09.044 oneshot 00:07:09.044 tick 100 00:07:09.044 tick 100 00:07:09.044 tick 250 00:07:09.044 tick 100 00:07:09.044 tick 100 00:07:09.044 tick 250 00:07:09.044 tick 100 00:07:09.044 tick 500 00:07:09.044 tick 100 00:07:09.044 tick 100 00:07:09.044 tick 250 00:07:09.044 tick 100 00:07:09.044 tick 100 00:07:09.044 test_end 00:07:09.044 00:07:09.044 real 0m1.128s 00:07:09.044 user 0m1.067s 00:07:09.044 sys 0m0.058s 00:07:09.044 14:27:15 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.044 14:27:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:09.044 ************************************ 00:07:09.044 END TEST event_reactor 00:07:09.044 ************************************ 00:07:09.044 14:27:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:09.044 14:27:16 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:09.044 14:27:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.044 14:27:16 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.044 ************************************ 00:07:09.044 START TEST event_reactor_perf 00:07:09.044 ************************************ 00:07:09.044 14:27:16 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:07:09.044 [2024-11-20 14:27:16.037947] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:07:09.044 [2024-11-20 14:27:16.037993] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664686 ] 00:07:09.044 [2024-11-20 14:27:16.103565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.303 [2024-11-20 14:27:16.131493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.238 test_start 00:07:10.238 test_end 00:07:10.238 Performance: 539145 events per second 00:07:10.238 00:07:10.238 real 0m1.128s 00:07:10.238 user 0m1.066s 00:07:10.238 sys 0m0.058s 00:07:10.238 14:27:17 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.238 14:27:17 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:10.238 ************************************ 00:07:10.238 END TEST event_reactor_perf 00:07:10.238 ************************************ 00:07:10.238 14:27:17 event -- event/event.sh@49 -- # uname -s 00:07:10.238 14:27:17 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:10.238 14:27:17 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:10.238 14:27:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.238 14:27:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.238 14:27:17 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.238 ************************************ 00:07:10.238 START TEST event_scheduler 00:07:10.239 ************************************ 00:07:10.239 14:27:17 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:10.239 * Looking for test storage... 00:07:10.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:10.239 14:27:17 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.239 14:27:17 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.239 14:27:17 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.499 14:27:17 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.499 14:27:17 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:10.499 14:27:17 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.500 14:27:17 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.500 --rc genhtml_branch_coverage=1 00:07:10.500 --rc genhtml_function_coverage=1 00:07:10.500 --rc genhtml_legend=1 00:07:10.500 --rc geninfo_all_blocks=1 00:07:10.500 --rc geninfo_unexecuted_blocks=1 00:07:10.500 00:07:10.500 ' 00:07:10.500 14:27:17 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.500 --rc genhtml_branch_coverage=1 00:07:10.500 --rc genhtml_function_coverage=1 00:07:10.500 --rc 
genhtml_legend=1 00:07:10.500 --rc geninfo_all_blocks=1 00:07:10.500 --rc geninfo_unexecuted_blocks=1 00:07:10.500 00:07:10.500 ' 00:07:10.500 14:27:17 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:10.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.500 --rc genhtml_branch_coverage=1 00:07:10.500 --rc genhtml_function_coverage=1 00:07:10.500 --rc genhtml_legend=1 00:07:10.500 --rc geninfo_all_blocks=1 00:07:10.500 --rc geninfo_unexecuted_blocks=1 00:07:10.500 00:07:10.500 ' 00:07:10.500 14:27:17 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:10.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.500 --rc genhtml_branch_coverage=1 00:07:10.500 --rc genhtml_function_coverage=1 00:07:10.500 --rc genhtml_legend=1 00:07:10.500 --rc geninfo_all_blocks=1 00:07:10.500 --rc geninfo_unexecuted_blocks=1 00:07:10.500 00:07:10.500 ' 00:07:10.500 14:27:17 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:10.500 14:27:17 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3665068 00:07:10.500 14:27:17 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:10.500 14:27:17 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3665068 00:07:10.500 14:27:17 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3665068 ']' 00:07:10.500 14:27:17 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.500 14:27:17 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.500 14:27:17 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:10.500 14:27:17 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.500 14:27:17 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.500 14:27:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.500 [2024-11-20 14:27:17.358812] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:07:10.500 [2024-11-20 14:27:17.358869] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665068 ] 00:07:10.500 [2024-11-20 14:27:17.442761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.500 [2024-11-20 14:27:17.494886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.500 [2024-11-20 14:27:17.495055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.500 [2024-11-20 14:27:17.495219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.500 [2024-11-20 14:27:17.495220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.438 14:27:18 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.438 14:27:18 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:11.438 14:27:18 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:11.438 14:27:18 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.438 14:27:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:11.438 [2024-11-20 14:27:18.153691] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:11.438 [2024-11-20 14:27:18.153705] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:11.438 [2024-11-20 14:27:18.153713] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:11.438 [2024-11-20 14:27:18.153718] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:11.438 [2024-11-20 14:27:18.153721] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:11.438 14:27:18 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.438 14:27:18 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:11.438 14:27:18 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.438 14:27:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:11.438 [2024-11-20 14:27:18.210775] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:11.438 14:27:18 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.438 14:27:18 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:11.438 14:27:18 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.438 14:27:18 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.438 14:27:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:11.438 ************************************ 00:07:11.438 START TEST scheduler_create_thread 00:07:11.438 ************************************ 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.438 2 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.438 3 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.438 4 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.438 5 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.438 14:27:18 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.438 6 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.438 7 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.438 8 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.438 14:27:18 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.438 9 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.438 10 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.438 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.438 14:27:18 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:11.439 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.439 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.439 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.439 14:27:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:11.439 14:27:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:11.439 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.439 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.006 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.006 00:07:12.006 real 0m0.591s 00:07:12.006 user 0m0.010s 00:07:12.006 sys 0m0.008s 00:07:12.006 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.006 14:27:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.006 ************************************ 00:07:12.006 END TEST scheduler_create_thread 00:07:12.006 ************************************ 00:07:12.006 14:27:18 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:12.006 14:27:18 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3665068 00:07:12.006 14:27:18 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3665068 ']' 00:07:12.006 14:27:18 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 3665068 00:07:12.006 14:27:18 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:12.006 14:27:18 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.006 14:27:18 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3665068 00:07:12.006 14:27:18 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:12.006 14:27:18 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:12.006 14:27:18 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3665068' 00:07:12.006 killing process with pid 3665068 00:07:12.006 14:27:18 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3665068 00:07:12.006 14:27:18 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3665068 00:07:12.270 [2024-11-20 14:27:19.306388] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:07:12.532 00:07:12.532 real 0m2.197s 00:07:12.532 user 0m4.424s 00:07:12.532 sys 0m0.314s 00:07:12.532 14:27:19 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.532 14:27:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:12.532 ************************************ 00:07:12.532 END TEST event_scheduler 00:07:12.532 ************************************ 00:07:12.532 14:27:19 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:12.532 14:27:19 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:12.532 14:27:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.532 14:27:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.532 14:27:19 event -- common/autotest_common.sh@10 -- # set +x 00:07:12.532 ************************************ 00:07:12.532 START TEST app_repeat 00:07:12.532 ************************************ 00:07:12.532 14:27:19 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:12.532 14:27:19 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.532 14:27:19 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.532 14:27:19 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:12.532 14:27:19 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:12.532 14:27:19 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:12.532 14:27:19 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:12.532 14:27:19 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:12.532 14:27:19 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3665466 00:07:12.532 14:27:19 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:12.532 14:27:19 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3665466' 00:07:12.532 
Process app_repeat pid: 3665466 00:07:12.532 14:27:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:12.532 14:27:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:12.532 spdk_app_start Round 0 00:07:12.532 14:27:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3665466 /var/tmp/spdk-nbd.sock 00:07:12.532 14:27:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3665466 ']' 00:07:12.532 14:27:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:12.532 14:27:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.533 14:27:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:12.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:12.533 14:27:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.533 14:27:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:12.533 14:27:19 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:12.533 [2024-11-20 14:27:19.476304] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:07:12.533 [2024-11-20 14:27:19.476350] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665466 ] 00:07:12.533 [2024-11-20 14:27:19.541200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:12.533 [2024-11-20 14:27:19.573042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.533 [2024-11-20 14:27:19.573042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.792 14:27:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.792 14:27:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:12.792 14:27:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.792 Malloc0 00:07:12.792 14:27:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:13.051 Malloc1 00:07:13.051 14:27:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:13.051 14:27:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.052 14:27:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:13.052 14:27:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:13.052 14:27:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.052 14:27:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:13.052 14:27:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:13.052 
14:27:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.052 14:27:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:13.052 14:27:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:13.052 14:27:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.052 14:27:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:13.052 14:27:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:13.052 14:27:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:13.052 14:27:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.052 14:27:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:13.310 /dev/nbd0 00:07:13.310 14:27:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:13.310 14:27:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:13.310 14:27:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:13.310 14:27:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:13.310 14:27:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.310 14:27:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.310 14:27:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:13.310 14:27:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:13.310 14:27:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.310 14:27:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.310 14:27:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:13.310 1+0 records in 00:07:13.310 1+0 records out 00:07:13.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175883 s, 23.3 MB/s 00:07:13.310 14:27:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.310 14:27:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:13.310 14:27:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.310 14:27:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.310 14:27:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:13.311 14:27:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.311 14:27:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.311 14:27:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:13.311 /dev/nbd1 00:07:13.311 14:27:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:13.311 14:27:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:13.311 14:27:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:13.311 14:27:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:13.311 14:27:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.311 14:27:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.311 14:27:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:13.311 14:27:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:13.311 14:27:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.311 14:27:20 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.311 14:27:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:13.311 1+0 records in 00:07:13.311 1+0 records out 00:07:13.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167477 s, 24.5 MB/s 00:07:13.311 14:27:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.311 14:27:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:13.311 14:27:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.311 14:27:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.311 14:27:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:13.311 14:27:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.311 14:27:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.311 14:27:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.311 14:27:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.311 14:27:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:13.570 { 00:07:13.570 "nbd_device": "/dev/nbd0", 00:07:13.570 "bdev_name": "Malloc0" 00:07:13.570 }, 00:07:13.570 { 00:07:13.570 "nbd_device": "/dev/nbd1", 00:07:13.570 "bdev_name": "Malloc1" 00:07:13.570 } 00:07:13.570 ]' 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:13.570 { 00:07:13.570 "nbd_device": "/dev/nbd0", 00:07:13.570 "bdev_name": "Malloc0" 00:07:13.570 
}, 00:07:13.570 { 00:07:13.570 "nbd_device": "/dev/nbd1", 00:07:13.570 "bdev_name": "Malloc1" 00:07:13.570 } 00:07:13.570 ]' 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:13.570 /dev/nbd1' 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:13.570 /dev/nbd1' 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:13.570 256+0 records in 00:07:13.570 256+0 records out 00:07:13.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432434 s, 242 MB/s 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:13.570 256+0 records in 00:07:13.570 256+0 records out 00:07:13.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013654 s, 76.8 MB/s 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:13.570 256+0 records in 00:07:13.570 256+0 records out 00:07:13.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124878 s, 84.0 MB/s 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:13.570 14:27:20 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.570 14:27:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.829 14:27:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.829 14:27:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.829 14:27:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.829 14:27:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.829 14:27:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.829 14:27:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.829 14:27:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.829 14:27:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.829 14:27:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.829 14:27:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:14.088 14:27:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.088 14:27:20 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.088 14:27:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.088 14:27:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.088 14:27:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.088 14:27:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.088 14:27:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:14.088 14:27:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.088 14:27:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.088 14:27:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.088 14:27:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.088 14:27:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:14.088 14:27:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.088 14:27:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:14.088 14:27:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:14.088 14:27:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.088 14:27:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:14.088 14:27:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:14.088 14:27:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:14.088 14:27:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:14.088 14:27:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:14.088 14:27:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:14.088 14:27:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:14.088 14:27:21 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:14.346 14:27:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:14.346 [2024-11-20 14:27:21.374744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.346 [2024-11-20 14:27:21.403621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.346 [2024-11-20 14:27:21.403623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.604 [2024-11-20 14:27:21.433013] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:14.604 [2024-11-20 14:27:21.433044] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:17.891 14:27:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:17.891 14:27:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:17.891 spdk_app_start Round 1 00:07:17.891 14:27:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3665466 /var/tmp/spdk-nbd.sock 00:07:17.891 14:27:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3665466 ']' 00:07:17.891 14:27:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:17.891 14:27:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.891 14:27:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:17.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:17.891 14:27:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.891 14:27:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:17.891 14:27:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.891 14:27:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:17.891 14:27:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.892 Malloc0 00:07:17.892 14:27:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.892 Malloc1 00:07:17.892 14:27:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.892 14:27:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.892 14:27:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.892 14:27:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.892 14:27:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.892 14:27:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.892 14:27:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.892 14:27:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.892 14:27:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.892 14:27:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.892 14:27:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.892 14:27:24 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:17.892 14:27:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:17.892 14:27:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.892 14:27:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.892 14:27:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:17.892 /dev/nbd0 00:07:17.892 14:27:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.892 14:27:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.892 14:27:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:17.892 14:27:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:17.892 14:27:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.892 14:27:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.892 14:27:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:17.892 14:27:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:17.892 14:27:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.892 14:27:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.892 14:27:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.892 1+0 records in 00:07:17.892 1+0 records out 00:07:17.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179243 s, 22.9 MB/s 00:07:18.151 14:27:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:18.151 14:27:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:18.151 14:27:24 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:18.151 14:27:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.151 14:27:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:18.151 14:27:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.151 14:27:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:18.151 14:27:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:18.151 /dev/nbd1 00:07:18.151 14:27:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:18.151 14:27:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:18.151 14:27:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:18.151 14:27:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:18.151 14:27:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.151 14:27:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.151 14:27:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:18.151 14:27:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:18.151 14:27:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.151 14:27:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.151 14:27:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:18.151 1+0 records in 00:07:18.151 1+0 records out 00:07:18.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000157509 s, 26.0 MB/s 00:07:18.151 14:27:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:18.151 14:27:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:18.151 14:27:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:18.151 14:27:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.151 14:27:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:18.151 14:27:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.151 14:27:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:18.151 14:27:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.151 14:27:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.151 14:27:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:18.410 { 00:07:18.410 "nbd_device": "/dev/nbd0", 00:07:18.410 "bdev_name": "Malloc0" 00:07:18.410 }, 00:07:18.410 { 00:07:18.410 "nbd_device": "/dev/nbd1", 00:07:18.410 "bdev_name": "Malloc1" 00:07:18.410 } 00:07:18.410 ]' 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:18.410 { 00:07:18.410 "nbd_device": "/dev/nbd0", 00:07:18.410 "bdev_name": "Malloc0" 00:07:18.410 }, 00:07:18.410 { 00:07:18.410 "nbd_device": "/dev/nbd1", 00:07:18.410 "bdev_name": "Malloc1" 00:07:18.410 } 00:07:18.410 ]' 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:18.410 /dev/nbd1' 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:18.410 /dev/nbd1' 00:07:18.410 
14:27:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:18.410 256+0 records in 00:07:18.410 256+0 records out 00:07:18.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00430562 s, 244 MB/s 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.410 256+0 records in 00:07:18.410 256+0 records out 00:07:18.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119824 s, 87.5 MB/s 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.410 256+0 records in 00:07:18.410 256+0 records out 00:07:18.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012346 s, 84.9 MB/s 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:18.410 14:27:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.411 14:27:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:18.411 14:27:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.411 14:27:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.669 14:27:25 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.669 14:27:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.928 14:27:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:18.928 14:27:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:18.928 14:27:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.928 14:27:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:18.928 14:27:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.928 14:27:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:18.928 14:27:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:18.928 14:27:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:18.928 14:27:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:18.928 14:27:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:18.928 14:27:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:18.928 14:27:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:18.928 14:27:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:19.187 14:27:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:19.187 [2024-11-20 14:27:26.166230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.187 [2024-11-20 14:27:26.195019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.187 [2024-11-20 14:27:26.195019] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.187 [2024-11-20 14:27:26.224791] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:19.187 [2024-11-20 14:27:26.224822] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:22.490 14:27:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:22.490 14:27:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:22.490 spdk_app_start Round 2 00:07:22.490 14:27:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3665466 /var/tmp/spdk-nbd.sock 00:07:22.490 14:27:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3665466 ']' 00:07:22.490 14:27:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:22.490 14:27:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.490 14:27:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:22.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:22.490 14:27:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.490 14:27:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:22.490 14:27:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.490 14:27:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:22.490 14:27:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:22.490 Malloc0 00:07:22.490 14:27:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:22.749 Malloc1 00:07:22.749 14:27:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:22.749 /dev/nbd0 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:22.749 14:27:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:22.749 14:27:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:22.749 14:27:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:22.749 14:27:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:22.749 14:27:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:22.749 14:27:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:22.749 14:27:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:22.749 14:27:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:22.749 14:27:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:22.749 1+0 records in 00:07:22.749 1+0 records out 00:07:22.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179705 s, 22.8 MB/s 00:07:22.749 14:27:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:22.749 14:27:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:22.749 14:27:29 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:22.749 14:27:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:22.749 14:27:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.749 14:27:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:23.008 /dev/nbd1 00:07:23.008 14:27:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:23.008 14:27:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:23.008 14:27:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:23.008 14:27:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:23.008 14:27:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:23.008 14:27:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:23.008 14:27:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:23.008 14:27:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:23.008 14:27:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:23.008 14:27:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:23.008 14:27:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:23.008 1+0 records in 00:07:23.008 1+0 records out 00:07:23.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199383 s, 20.5 MB/s 00:07:23.009 14:27:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:23.009 14:27:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:23.009 14:27:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:23.009 14:27:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:23.009 14:27:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:23.009 14:27:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.009 14:27:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:23.009 14:27:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.009 14:27:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.009 14:27:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:23.268 { 00:07:23.268 "nbd_device": "/dev/nbd0", 00:07:23.268 "bdev_name": "Malloc0" 00:07:23.268 }, 00:07:23.268 { 00:07:23.268 "nbd_device": "/dev/nbd1", 00:07:23.268 "bdev_name": "Malloc1" 00:07:23.268 } 00:07:23.268 ]' 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:23.268 { 00:07:23.268 "nbd_device": "/dev/nbd0", 00:07:23.268 "bdev_name": "Malloc0" 00:07:23.268 }, 00:07:23.268 { 00:07:23.268 "nbd_device": "/dev/nbd1", 00:07:23.268 "bdev_name": "Malloc1" 00:07:23.268 } 00:07:23.268 ]' 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:23.268 /dev/nbd1' 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:23.268 /dev/nbd1' 00:07:23.268 
14:27:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:23.268 256+0 records in 00:07:23.268 256+0 records out 00:07:23.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00430234 s, 244 MB/s 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:23.268 256+0 records in 00:07:23.268 256+0 records out 00:07:23.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012077 s, 86.8 MB/s 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:23.268 256+0 records in 00:07:23.268 256+0 records out 00:07:23.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129201 s, 81.2 MB/s 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:23.268 14:27:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:23.527 14:27:30 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.527 14:27:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.787 14:27:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:23.787 14:27:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:23.787 14:27:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.787 14:27:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:23.787 14:27:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:23.787 14:27:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.787 14:27:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:23.787 14:27:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:23.787 14:27:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:23.787 14:27:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:23.787 14:27:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:23.787 14:27:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:23.787 14:27:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:24.046 14:27:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:24.046 [2024-11-20 14:27:30.956147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.046 [2024-11-20 14:27:30.985293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.046 [2024-11-20 14:27:30.985293] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.046 [2024-11-20 14:27:31.014640] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:24.046 [2024-11-20 14:27:31.014676] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:27.331 14:27:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3665466 /var/tmp/spdk-nbd.sock 00:07:27.331 14:27:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3665466 ']' 00:07:27.331 14:27:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:27.331 14:27:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.331 14:27:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:27.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:27.331 14:27:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.331 14:27:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:27.331 14:27:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.331 14:27:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:27.331 14:27:34 event.app_repeat -- event/event.sh@39 -- # killprocess 3665466 00:07:27.331 14:27:34 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3665466 ']' 00:07:27.331 14:27:34 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3665466 00:07:27.331 14:27:34 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:27.331 14:27:34 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.331 14:27:34 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3665466 00:07:27.331 14:27:34 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.331 14:27:34 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.331 14:27:34 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3665466' 00:07:27.331 killing process with pid 3665466 00:07:27.331 14:27:34 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3665466 00:07:27.332 14:27:34 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3665466 00:07:27.332 spdk_app_start is called in Round 0. 00:07:27.332 Shutdown signal received, stop current app iteration 00:07:27.332 Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 reinitialization... 00:07:27.332 spdk_app_start is called in Round 1. 00:07:27.332 Shutdown signal received, stop current app iteration 00:07:27.332 Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 reinitialization... 00:07:27.332 spdk_app_start is called in Round 2. 
00:07:27.332 Shutdown signal received, stop current app iteration 00:07:27.332 Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 reinitialization... 00:07:27.332 spdk_app_start is called in Round 3. 00:07:27.332 Shutdown signal received, stop current app iteration 00:07:27.332 14:27:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:27.332 14:27:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:27.332 00:07:27.332 real 0m14.702s 00:07:27.332 user 0m32.252s 00:07:27.332 sys 0m1.788s 00:07:27.332 14:27:34 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.332 14:27:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:27.332 ************************************ 00:07:27.332 END TEST app_repeat 00:07:27.332 ************************************ 00:07:27.332 14:27:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:27.332 14:27:34 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:27.332 14:27:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.332 14:27:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.332 14:27:34 event -- common/autotest_common.sh@10 -- # set +x 00:07:27.332 ************************************ 00:07:27.332 START TEST cpu_locks 00:07:27.332 ************************************ 00:07:27.332 14:27:34 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:27.332 * Looking for test storage... 
00:07:27.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:27.332 14:27:34 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.332 14:27:34 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.332 14:27:34 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.332 14:27:34 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.332 14:27:34 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:27.332 14:27:34 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.332 14:27:34 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.332 --rc genhtml_branch_coverage=1 00:07:27.332 --rc genhtml_function_coverage=1 00:07:27.332 --rc genhtml_legend=1 00:07:27.332 --rc geninfo_all_blocks=1 00:07:27.332 --rc geninfo_unexecuted_blocks=1 00:07:27.332 00:07:27.332 ' 00:07:27.332 14:27:34 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.332 --rc genhtml_branch_coverage=1 00:07:27.332 --rc genhtml_function_coverage=1 00:07:27.332 --rc genhtml_legend=1 00:07:27.332 --rc geninfo_all_blocks=1 00:07:27.332 --rc geninfo_unexecuted_blocks=1 
00:07:27.332 00:07:27.332 ' 00:07:27.332 14:27:34 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.332 --rc genhtml_branch_coverage=1 00:07:27.332 --rc genhtml_function_coverage=1 00:07:27.332 --rc genhtml_legend=1 00:07:27.332 --rc geninfo_all_blocks=1 00:07:27.332 --rc geninfo_unexecuted_blocks=1 00:07:27.332 00:07:27.332 ' 00:07:27.332 14:27:34 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.332 --rc genhtml_branch_coverage=1 00:07:27.332 --rc genhtml_function_coverage=1 00:07:27.332 --rc genhtml_legend=1 00:07:27.332 --rc geninfo_all_blocks=1 00:07:27.332 --rc geninfo_unexecuted_blocks=1 00:07:27.332 00:07:27.332 ' 00:07:27.332 14:27:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:27.332 14:27:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:27.332 14:27:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:27.332 14:27:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:27.332 14:27:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.332 14:27:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.332 14:27:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.332 ************************************ 00:07:27.332 START TEST default_locks 00:07:27.332 ************************************ 00:07:27.332 14:27:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:27.332 14:27:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3669041 00:07:27.332 14:27:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3669041 00:07:27.332 14:27:34 
event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3669041 ']' 00:07:27.332 14:27:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.332 14:27:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.332 14:27:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.332 14:27:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.332 14:27:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.332 14:27:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:27.591 [2024-11-20 14:27:34.393652] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:07:27.591 [2024-11-20 14:27:34.393701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669041 ] 00:07:27.591 [2024-11-20 14:27:34.460475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.591 [2024-11-20 14:27:34.490863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.849 14:27:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.849 14:27:34 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:27.849 14:27:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3669041 00:07:27.849 14:27:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3669041 00:07:27.849 14:27:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:27.849 lslocks: write error 00:07:27.849 14:27:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3669041 00:07:27.849 14:27:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3669041 ']' 00:07:27.849 14:27:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3669041 00:07:27.849 14:27:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:27.849 14:27:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.849 14:27:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3669041 00:07:27.849 14:27:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.849 14:27:34 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.849 14:27:34 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3669041' 00:07:27.849 killing process with pid 3669041 00:07:27.849 14:27:34 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3669041 00:07:27.849 14:27:34 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3669041 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3669041 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3669041 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3669041 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3669041 ']' 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3669041) - No such process 00:07:28.108 ERROR: process (pid: 3669041) is no longer running 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:28.108 00:07:28.108 real 0m0.681s 00:07:28.108 user 0m0.646s 00:07:28.108 sys 0m0.344s 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.108 14:27:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.108 ************************************ 00:07:28.108 END TEST default_locks 00:07:28.108 ************************************ 00:07:28.108 14:27:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:28.108 14:27:35 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.108 14:27:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.108 14:27:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.108 ************************************ 00:07:28.108 START TEST default_locks_via_rpc 00:07:28.108 ************************************ 00:07:28.108 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:28.108 14:27:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3669393 00:07:28.108 14:27:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3669393 00:07:28.108 14:27:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:28.108 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3669393 ']' 00:07:28.108 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.108 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.108 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.108 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.108 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.108 [2024-11-20 14:27:35.116706] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:07:28.108 [2024-11-20 14:27:35.116756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669393 ] 00:07:28.379 [2024-11-20 14:27:35.181772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.379 [2024-11-20 14:27:35.210421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.379 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.379 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:28.379 14:27:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:28.379 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.379 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.379 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.379 14:27:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:28.379 14:27:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:28.379 14:27:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:28.379 14:27:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:28.379 14:27:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:28.379 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.379 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.379 14:27:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.379 14:27:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3669393 00:07:28.379 14:27:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3669393 00:07:28.379 14:27:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:28.640 14:27:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3669393 00:07:28.640 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3669393 ']' 00:07:28.640 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3669393 00:07:28.640 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:28.640 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.640 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3669393 00:07:28.640 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.640 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.640 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3669393' 00:07:28.640 killing process with pid 3669393 00:07:28.640 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3669393 00:07:28.640 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3669393 00:07:28.900 00:07:28.900 real 0m0.701s 00:07:28.900 user 0m0.681s 00:07:28.900 sys 0m0.341s 00:07:28.900 14:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.900 14:27:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.900 ************************************ 00:07:28.900 END TEST default_locks_via_rpc 00:07:28.900 ************************************ 00:07:28.900 14:27:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:28.900 14:27:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.900 14:27:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.900 14:27:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.900 ************************************ 00:07:28.900 START TEST non_locking_app_on_locked_coremask 00:07:28.901 ************************************ 00:07:28.901 14:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:28.901 14:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3669433 00:07:28.901 14:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3669433 /var/tmp/spdk.sock 00:07:28.901 14:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3669433 ']' 00:07:28.901 14:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.901 14:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.901 14:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:28.901 14:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:28.901 14:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.901 14:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.901 [2024-11-20 14:27:35.864132] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:07:28.901 [2024-11-20 14:27:35.864181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669433 ] 00:07:28.901 [2024-11-20 14:27:35.929150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.901 [2024-11-20 14:27:35.958957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.161 14:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.161 14:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:29.161 14:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3669547 00:07:29.161 14:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3669547 /var/tmp/spdk2.sock 00:07:29.161 14:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3669547 ']' 00:07:29.161 14:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.161 14:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.161 14:27:36 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.161 14:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.161 14:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:29.161 14:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.161 [2024-11-20 14:27:36.165813] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:07:29.161 [2024-11-20 14:27:36.165863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669547 ] 00:07:29.421 [2024-11-20 14:27:36.262299] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:29.421 [2024-11-20 14:27:36.262328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.421 [2024-11-20 14:27:36.324568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.990 14:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.990 14:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:29.990 14:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3669433 00:07:29.990 14:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3669433 00:07:29.990 14:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:30.250 lslocks: write error 00:07:30.250 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3669433 00:07:30.250 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3669433 ']' 00:07:30.250 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3669433 00:07:30.250 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:30.250 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.250 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3669433 00:07:30.250 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.250 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.250 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3669433' 00:07:30.250 killing process with pid 3669433 00:07:30.250 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3669433 00:07:30.250 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3669433 00:07:30.817 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3669547 00:07:30.817 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3669547 ']' 00:07:30.817 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3669547 00:07:30.817 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:30.817 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.817 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3669547 00:07:30.817 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.817 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.817 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3669547' 00:07:30.818 killing process with pid 3669547 00:07:30.818 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3669547 00:07:30.818 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3669547 00:07:30.818 00:07:30.818 real 0m2.027s 00:07:30.818 user 0m2.179s 00:07:30.818 sys 0m0.676s 00:07:30.818 14:27:37 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.818 14:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.818 ************************************ 00:07:30.818 END TEST non_locking_app_on_locked_coremask 00:07:30.818 ************************************ 00:07:30.818 14:27:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:30.818 14:27:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.818 14:27:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.818 14:27:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.077 ************************************ 00:07:31.077 START TEST locking_app_on_unlocked_coremask 00:07:31.077 ************************************ 00:07:31.077 14:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:31.077 14:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3670117 00:07:31.077 14:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3670117 /var/tmp/spdk.sock 00:07:31.077 14:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3670117 ']' 00:07:31.077 14:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.077 14:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.077 14:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:31.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.077 14:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.077 14:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.077 14:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:31.077 [2024-11-20 14:27:37.938871] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:07:31.077 [2024-11-20 14:27:37.938920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670117 ] 00:07:31.077 [2024-11-20 14:27:38.004585] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:31.077 [2024-11-20 14:27:38.004611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.077 [2024-11-20 14:27:38.033901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.336 14:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.336 14:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:31.336 14:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3670133 00:07:31.336 14:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3670133 /var/tmp/spdk2.sock 00:07:31.336 14:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3670133 ']' 00:07:31.336 14:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:31.336 14:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.336 14:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:31.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:31.336 14:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.336 14:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.336 14:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:31.336 [2024-11-20 14:27:38.241036] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:07:31.336 [2024-11-20 14:27:38.241088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670133 ] 00:07:31.336 [2024-11-20 14:27:38.337322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.595 [2024-11-20 14:27:38.399644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.176 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.176 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:32.176 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3670133 00:07:32.176 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3670133 00:07:32.176 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:32.435 lslocks: write error 00:07:32.435 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3670117 00:07:32.435 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3670117 ']' 00:07:32.435 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3670117 00:07:32.435 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:32.435 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.435 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3670117 00:07:32.435 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.435 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.435 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3670117' 00:07:32.435 killing process with pid 3670117 00:07:32.435 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3670117 00:07:32.435 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3670117 00:07:32.694 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3670133 00:07:32.694 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3670133 ']' 00:07:32.694 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3670133 00:07:32.694 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:32.694 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.694 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3670133 00:07:32.694 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.694 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.694 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3670133' 00:07:32.694 killing process with pid 3670133 00:07:32.694 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3670133 00:07:32.694 14:27:39 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3670133 00:07:32.953 00:07:32.953 real 0m2.006s 00:07:32.953 user 0m2.160s 00:07:32.953 sys 0m0.670s 00:07:32.953 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.953 14:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.953 ************************************ 00:07:32.953 END TEST locking_app_on_unlocked_coremask 00:07:32.953 ************************************ 00:07:32.953 14:27:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:32.953 14:27:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.953 14:27:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.953 14:27:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.953 ************************************ 00:07:32.953 START TEST locking_app_on_locked_coremask 00:07:32.953 ************************************ 00:07:32.953 14:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:32.953 14:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3670508 00:07:32.953 14:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3670508 /var/tmp/spdk.sock 00:07:32.953 14:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:32.953 14:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3670508 ']' 00:07:32.953 14:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:07:32.953 14:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.953 14:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.953 14:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.953 14:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.953 [2024-11-20 14:27:39.992005] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:07:32.953 [2024-11-20 14:27:39.992054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670508 ] 00:07:33.212 [2024-11-20 14:27:40.059189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.213 [2024-11-20 14:27:40.091828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3670511 00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3670511 /var/tmp/spdk2.sock 00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3670511 /var/tmp/spdk2.sock 00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3670511 /var/tmp/spdk2.sock 00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3670511 ']' 00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.213 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.527 [2024-11-20 14:27:40.295367] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:07:33.527 [2024-11-20 14:27:40.295420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670511 ] 00:07:33.527 [2024-11-20 14:27:40.388627] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3670508 has claimed it. 00:07:33.527 [2024-11-20 14:27:40.388658] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:34.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3670511) - No such process 00:07:34.091 ERROR: process (pid: 3670511) is no longer running 00:07:34.091 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.091 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:34.091 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:34.091 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.091 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:34.091 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.091 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3670508 00:07:34.091 14:27:40 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3670508 00:07:34.091 14:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:34.091 lslocks: write error 00:07:34.091 14:27:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3670508 00:07:34.091 14:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3670508 ']' 00:07:34.091 14:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3670508 00:07:34.091 14:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:34.091 14:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.091 14:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3670508 00:07:34.091 14:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.091 14:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.091 14:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3670508' 00:07:34.091 killing process with pid 3670508 00:07:34.092 14:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3670508 00:07:34.092 14:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3670508 00:07:34.350 00:07:34.350 real 0m1.326s 00:07:34.350 user 0m1.442s 00:07:34.350 sys 0m0.418s 00:07:34.350 14:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.350 14:27:41 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:34.350 ************************************ 00:07:34.350 END TEST locking_app_on_locked_coremask 00:07:34.350 ************************************ 00:07:34.350 14:27:41 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:34.350 14:27:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.350 14:27:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.350 14:27:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.350 ************************************ 00:07:34.350 START TEST locking_overlapped_coremask 00:07:34.350 ************************************ 00:07:34.350 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:34.350 14:27:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3670875 00:07:34.350 14:27:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3670875 /var/tmp/spdk.sock 00:07:34.350 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3670875 ']' 00:07:34.350 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.350 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.350 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:34.350 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.350 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.350 14:27:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:34.350 [2024-11-20 14:27:41.367107] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:07:34.351 [2024-11-20 14:27:41.367160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670875 ] 00:07:34.609 [2024-11-20 14:27:41.433137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.609 [2024-11-20 14:27:41.466280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.609 [2024-11-20 14:27:41.466361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.609 [2024-11-20 14:27:41.466363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3670882 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3670882 /var/tmp/spdk2.sock 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3670882 
/var/tmp/spdk2.sock 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3670882 /var/tmp/spdk2.sock 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3670882 ']' 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:34.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.609 14:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.868 [2024-11-20 14:27:41.670618] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:07:34.868 [2024-11-20 14:27:41.670668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670882 ] 00:07:34.868 [2024-11-20 14:27:41.791319] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3670875 has claimed it. 00:07:34.868 [2024-11-20 14:27:41.791359] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:35.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3670882) - No such process 00:07:35.435 ERROR: process (pid: 3670882) is no longer running 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3670875 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3670875 ']' 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3670875 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3670875 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3670875' 00:07:35.435 killing process with pid 3670875 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3670875 00:07:35.435 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3670875 00:07:35.694 00:07:35.694 real 0m1.198s 00:07:35.694 user 0m3.315s 00:07:35.694 sys 0m0.330s 00:07:35.694 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.694 14:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.694 
************************************ 00:07:35.694 END TEST locking_overlapped_coremask 00:07:35.694 ************************************ 00:07:35.694 14:27:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:35.694 14:27:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.694 14:27:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.694 14:27:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.694 ************************************ 00:07:35.694 START TEST locking_overlapped_coremask_via_rpc 00:07:35.694 ************************************ 00:07:35.694 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:35.694 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3671240 00:07:35.694 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3671240 /var/tmp/spdk.sock 00:07:35.694 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3671240 ']' 00:07:35.694 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.694 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.694 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:35.694 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.694 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.694 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:35.694 [2024-11-20 14:27:42.612032] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:07:35.694 [2024-11-20 14:27:42.612079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671240 ] 00:07:35.694 [2024-11-20 14:27:42.677528] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:35.694 [2024-11-20 14:27:42.677555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.694 [2024-11-20 14:27:42.708162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.694 [2024-11-20 14:27:42.708312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.694 [2024-11-20 14:27:42.708495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.954 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.954 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:35.954 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3671249 00:07:35.954 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3671249 /var/tmp/spdk2.sock 00:07:35.954 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- 
common/autotest_common.sh@835 -- # '[' -z 3671249 ']' 00:07:35.954 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:35.954 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.954 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:35.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:35.954 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.954 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.954 14:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:35.954 [2024-11-20 14:27:42.913548] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:07:35.954 [2024-11-20 14:27:42.913597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671249 ] 00:07:35.954 [2024-11-20 14:27:43.012048] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:35.954 [2024-11-20 14:27:43.012074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.213 [2024-11-20 14:27:43.071132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.213 [2024-11-20 14:27:43.074367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.213 [2024-11-20 14:27:43.074369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.781 14:27:43 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.781 [2024-11-20 14:27:43.711313] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3671240 has claimed it. 00:07:36.781 request: 00:07:36.781 { 00:07:36.781 "method": "framework_enable_cpumask_locks", 00:07:36.781 "req_id": 1 00:07:36.781 } 00:07:36.781 Got JSON-RPC error response 00:07:36.781 response: 00:07:36.781 { 00:07:36.781 "code": -32603, 00:07:36.781 "message": "Failed to claim CPU core: 2" 00:07:36.781 } 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3671240 /var/tmp/spdk.sock 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 3671240 ']' 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.781 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.040 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.040 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:37.040 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3671249 /var/tmp/spdk2.sock 00:07:37.040 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3671249 ']' 00:07:37.040 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.040 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.040 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:37.040 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.040 14:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.040 14:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.040 14:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:37.040 14:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:37.040 14:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:37.040 14:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:37.040 14:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:37.040 00:07:37.040 real 0m1.473s 00:07:37.040 user 0m0.659s 00:07:37.040 sys 0m0.103s 00:07:37.040 14:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.040 14:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.040 ************************************ 00:07:37.041 END TEST locking_overlapped_coremask_via_rpc 00:07:37.041 ************************************ 00:07:37.041 14:27:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:37.041 14:27:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3671240 ]] 00:07:37.041 14:27:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3671240 00:07:37.041 14:27:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3671240 ']' 00:07:37.041 14:27:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3671240 00:07:37.041 14:27:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:37.041 14:27:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.041 14:27:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3671240 00:07:37.299 14:27:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.299 14:27:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.299 14:27:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3671240' 00:07:37.300 killing process with pid 3671240 00:07:37.300 14:27:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3671240 00:07:37.300 14:27:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3671240 00:07:37.300 14:27:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3671249 ]] 00:07:37.300 14:27:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3671249 00:07:37.300 14:27:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3671249 ']' 00:07:37.300 14:27:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3671249 00:07:37.300 14:27:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:37.300 14:27:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.300 14:27:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3671249 00:07:37.300 14:27:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:37.300 14:27:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:37.300 14:27:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3671249' 00:07:37.300 killing process with pid 3671249 00:07:37.300 14:27:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3671249 00:07:37.300 14:27:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3671249 00:07:37.559 14:27:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:37.559 14:27:44 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:37.559 14:27:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3671240 ]] 00:07:37.559 14:27:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3671240 00:07:37.559 14:27:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3671240 ']' 00:07:37.559 14:27:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3671240 00:07:37.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3671240) - No such process 00:07:37.559 14:27:44 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3671240 is not found' 00:07:37.559 Process with pid 3671240 is not found 00:07:37.559 14:27:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3671249 ]] 00:07:37.559 14:27:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3671249 00:07:37.559 14:27:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3671249 ']' 00:07:37.559 14:27:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3671249 00:07:37.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3671249) - No such process 00:07:37.559 14:27:44 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3671249 is not found' 00:07:37.559 Process with pid 3671249 is not found 00:07:37.559 14:27:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:37.559 00:07:37.559 real 0m10.336s 00:07:37.559 user 0m18.964s 00:07:37.559 sys 0m3.587s 00:07:37.559 14:27:44 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.559 
14:27:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.559 ************************************ 00:07:37.559 END TEST cpu_locks 00:07:37.559 ************************************ 00:07:37.559 00:07:37.559 real 0m31.074s 00:07:37.559 user 1m2.024s 00:07:37.559 sys 0m6.155s 00:07:37.559 14:27:44 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.559 14:27:44 event -- common/autotest_common.sh@10 -- # set +x 00:07:37.559 ************************************ 00:07:37.559 END TEST event 00:07:37.559 ************************************ 00:07:37.559 14:27:44 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:37.559 14:27:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.559 14:27:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.559 14:27:44 -- common/autotest_common.sh@10 -- # set +x 00:07:37.559 ************************************ 00:07:37.559 START TEST thread 00:07:37.559 ************************************ 00:07:37.559 14:27:44 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:37.818 * Looking for test storage... 
00:07:37.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:37.819 14:27:44 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:37.819 14:27:44 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:37.819 14:27:44 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:37.819 14:27:44 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:37.819 14:27:44 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.819 14:27:44 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.819 14:27:44 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.819 14:27:44 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.819 14:27:44 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.819 14:27:44 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.819 14:27:44 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.819 14:27:44 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.819 14:27:44 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.819 14:27:44 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.819 14:27:44 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.819 14:27:44 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:37.819 14:27:44 thread -- scripts/common.sh@345 -- # : 1 00:07:37.819 14:27:44 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.819 14:27:44 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.819 14:27:44 thread -- scripts/common.sh@365 -- # decimal 1 00:07:37.819 14:27:44 thread -- scripts/common.sh@353 -- # local d=1 00:07:37.819 14:27:44 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.819 14:27:44 thread -- scripts/common.sh@355 -- # echo 1 00:07:37.819 14:27:44 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.819 14:27:44 thread -- scripts/common.sh@366 -- # decimal 2 00:07:37.819 14:27:44 thread -- scripts/common.sh@353 -- # local d=2 00:07:37.819 14:27:44 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.819 14:27:44 thread -- scripts/common.sh@355 -- # echo 2 00:07:37.819 14:27:44 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.819 14:27:44 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.819 14:27:44 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.819 14:27:44 thread -- scripts/common.sh@368 -- # return 0 00:07:37.819 14:27:44 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.819 14:27:44 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:37.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.819 --rc genhtml_branch_coverage=1 00:07:37.819 --rc genhtml_function_coverage=1 00:07:37.819 --rc genhtml_legend=1 00:07:37.819 --rc geninfo_all_blocks=1 00:07:37.819 --rc geninfo_unexecuted_blocks=1 00:07:37.819 00:07:37.819 ' 00:07:37.819 14:27:44 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:37.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.819 --rc genhtml_branch_coverage=1 00:07:37.819 --rc genhtml_function_coverage=1 00:07:37.819 --rc genhtml_legend=1 00:07:37.819 --rc geninfo_all_blocks=1 00:07:37.819 --rc geninfo_unexecuted_blocks=1 00:07:37.819 00:07:37.819 ' 00:07:37.819 14:27:44 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:37.819 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.819 --rc genhtml_branch_coverage=1 00:07:37.819 --rc genhtml_function_coverage=1 00:07:37.819 --rc genhtml_legend=1 00:07:37.819 --rc geninfo_all_blocks=1 00:07:37.819 --rc geninfo_unexecuted_blocks=1 00:07:37.819 00:07:37.819 ' 00:07:37.819 14:27:44 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:37.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.819 --rc genhtml_branch_coverage=1 00:07:37.819 --rc genhtml_function_coverage=1 00:07:37.819 --rc genhtml_legend=1 00:07:37.819 --rc geninfo_all_blocks=1 00:07:37.819 --rc geninfo_unexecuted_blocks=1 00:07:37.819 00:07:37.819 ' 00:07:37.819 14:27:44 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:37.819 14:27:44 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:37.819 14:27:44 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.819 14:27:44 thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.819 ************************************ 00:07:37.819 START TEST thread_poller_perf 00:07:37.819 ************************************ 00:07:37.819 14:27:44 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:37.819 [2024-11-20 14:27:44.772902] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:07:37.819 [2024-11-20 14:27:44.772946] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671695 ] 00:07:37.819 [2024-11-20 14:27:44.837586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.819 [2024-11-20 14:27:44.868087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.819 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:39.196 [2024-11-20T13:27:46.256Z] ====================================== 00:07:39.196 [2024-11-20T13:27:46.256Z] busy:2405350328 (cyc) 00:07:39.196 [2024-11-20T13:27:46.256Z] total_run_count: 419000 00:07:39.196 [2024-11-20T13:27:46.256Z] tsc_hz: 2400000000 (cyc) 00:07:39.196 [2024-11-20T13:27:46.256Z] ====================================== 00:07:39.196 [2024-11-20T13:27:46.256Z] poller_cost: 5740 (cyc), 2391 (nsec) 00:07:39.196 00:07:39.196 real 0m1.137s 00:07:39.196 user 0m1.084s 00:07:39.196 sys 0m0.049s 00:07:39.196 14:27:45 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.196 14:27:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:39.196 ************************************ 00:07:39.196 END TEST thread_poller_perf 00:07:39.196 ************************************ 00:07:39.196 14:27:45 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:39.196 14:27:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:39.196 14:27:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.196 14:27:45 thread -- common/autotest_common.sh@10 -- # set +x 00:07:39.196 ************************************ 00:07:39.196 START TEST thread_poller_perf 00:07:39.196 
************************************ 00:07:39.196 14:27:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:39.196 [2024-11-20 14:27:45.959017] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:07:39.196 [2024-11-20 14:27:45.959062] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672043 ] 00:07:39.196 [2024-11-20 14:27:46.024135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.196 [2024-11-20 14:27:46.052945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.196 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:40.130 [2024-11-20T13:27:47.190Z] ====================================== 00:07:40.130 [2024-11-20T13:27:47.190Z] busy:2401429846 (cyc) 00:07:40.130 [2024-11-20T13:27:47.190Z] total_run_count: 5562000 00:07:40.130 [2024-11-20T13:27:47.190Z] tsc_hz: 2400000000 (cyc) 00:07:40.130 [2024-11-20T13:27:47.190Z] ====================================== 00:07:40.130 [2024-11-20T13:27:47.190Z] poller_cost: 431 (cyc), 179 (nsec) 00:07:40.130 00:07:40.130 real 0m1.130s 00:07:40.130 user 0m1.066s 00:07:40.130 sys 0m0.059s 00:07:40.130 14:27:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.130 14:27:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:40.130 ************************************ 00:07:40.130 END TEST thread_poller_perf 00:07:40.130 ************************************ 00:07:40.130 14:27:47 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:40.130 00:07:40.130 real 0m2.486s 00:07:40.130 user 0m2.256s 00:07:40.130 sys 0m0.231s 00:07:40.130 14:27:47 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.130 14:27:47 thread -- common/autotest_common.sh@10 -- # set +x 00:07:40.130 ************************************ 00:07:40.130 END TEST thread 00:07:40.130 ************************************ 00:07:40.130 14:27:47 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:40.130 14:27:47 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:40.130 14:27:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.130 14:27:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.130 14:27:47 -- common/autotest_common.sh@10 -- # set +x 00:07:40.130 ************************************ 00:07:40.130 START TEST app_cmdline 00:07:40.130 ************************************ 00:07:40.130 14:27:47 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:40.389 * Looking for test storage... 00:07:40.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:40.389 14:27:47 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:40.389 14:27:47 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:40.389 14:27:47 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:40.389 14:27:47 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.389 14:27:47 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:40.389 14:27:47 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.389 14:27:47 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:40.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.389 --rc genhtml_branch_coverage=1 
00:07:40.389 --rc genhtml_function_coverage=1 00:07:40.389 --rc genhtml_legend=1 00:07:40.389 --rc geninfo_all_blocks=1 00:07:40.389 --rc geninfo_unexecuted_blocks=1 00:07:40.389 00:07:40.389 ' 00:07:40.389 14:27:47 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:40.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.389 --rc genhtml_branch_coverage=1 00:07:40.389 --rc genhtml_function_coverage=1 00:07:40.389 --rc genhtml_legend=1 00:07:40.389 --rc geninfo_all_blocks=1 00:07:40.389 --rc geninfo_unexecuted_blocks=1 00:07:40.389 00:07:40.389 ' 00:07:40.389 14:27:47 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:40.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.390 --rc genhtml_branch_coverage=1 00:07:40.390 --rc genhtml_function_coverage=1 00:07:40.390 --rc genhtml_legend=1 00:07:40.390 --rc geninfo_all_blocks=1 00:07:40.390 --rc geninfo_unexecuted_blocks=1 00:07:40.390 00:07:40.390 ' 00:07:40.390 14:27:47 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:40.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.390 --rc genhtml_branch_coverage=1 00:07:40.390 --rc genhtml_function_coverage=1 00:07:40.390 --rc genhtml_legend=1 00:07:40.390 --rc geninfo_all_blocks=1 00:07:40.390 --rc geninfo_unexecuted_blocks=1 00:07:40.390 00:07:40.390 ' 00:07:40.390 14:27:47 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:40.390 14:27:47 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3672441 00:07:40.390 14:27:47 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3672441 00:07:40.390 14:27:47 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3672441 ']' 00:07:40.390 14:27:47 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.390 14:27:47 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.390 14:27:47 app_cmdline -- common/autotest_common.sh@842 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.390 14:27:47 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.390 14:27:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:40.390 14:27:47 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:40.390 [2024-11-20 14:27:47.313323] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:07:40.390 [2024-11-20 14:27:47.313387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672441 ] 00:07:40.390 [2024-11-20 14:27:47.381036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.390 [2024-11-20 14:27:47.415267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.648 14:27:47 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.648 14:27:47 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:40.648 14:27:47 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:40.908 { 00:07:40.908 "version": "SPDK v25.01-pre git sha1 a361eb5e2", 00:07:40.908 "fields": { 00:07:40.908 "major": 25, 00:07:40.908 "minor": 1, 00:07:40.908 "patch": 0, 00:07:40.908 "suffix": "-pre", 00:07:40.908 "commit": "a361eb5e2" 00:07:40.908 } 00:07:40.908 } 00:07:40.908 14:27:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:40.908 14:27:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:40.908 14:27:47 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:40.908 14:27:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:40.908 14:27:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:40.908 14:27:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:40.908 14:27:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.908 14:27:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:40.908 14:27:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:40.908 14:27:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:40.908 request: 00:07:40.908 { 00:07:40.908 "method": "env_dpdk_get_mem_stats", 00:07:40.908 "req_id": 1 00:07:40.908 } 00:07:40.908 Got JSON-RPC error response 00:07:40.908 response: 00:07:40.908 { 00:07:40.908 "code": -32601, 00:07:40.908 "message": "Method not found" 00:07:40.908 } 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:40.908 14:27:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3672441 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3672441 ']' 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3672441 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.908 14:27:47 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3672441 00:07:41.167 14:27:47 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.167 14:27:47 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.167 14:27:47 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3672441' 00:07:41.167 killing process with pid 3672441 00:07:41.167 14:27:47 
app_cmdline -- common/autotest_common.sh@973 -- # kill 3672441 00:07:41.167 14:27:47 app_cmdline -- common/autotest_common.sh@978 -- # wait 3672441 00:07:41.167 00:07:41.167 real 0m1.005s 00:07:41.167 user 0m1.201s 00:07:41.167 sys 0m0.334s 00:07:41.167 14:27:48 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.167 14:27:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:41.167 ************************************ 00:07:41.167 END TEST app_cmdline 00:07:41.167 ************************************ 00:07:41.167 14:27:48 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:41.167 14:27:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.167 14:27:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.167 14:27:48 -- common/autotest_common.sh@10 -- # set +x 00:07:41.167 ************************************ 00:07:41.167 START TEST version 00:07:41.167 ************************************ 00:07:41.167 14:27:48 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:41.425 * Looking for test storage... 
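The app_cmdline trace above exercises the RPC failure path: calling a method that is not registered (`env_dpdk_get_mem_stats`) returns a JSON-RPC error with code -32601 ("Method not found"), which the `NOT` helper turns into the expected exit status `es=1`. A minimal stand-alone sketch of that response-shape check — no live SPDK target here, and the response text is copied from the trace, not generated:

```shell
# Error body as logged by rpc.py when the method is unregistered.
response='{"code": -32601, "message": "Method not found"}'

# Mirror the test's expectation: a -32601 error counts as the "expected" failure.
case "$response" in
  *'"code": -32601'*) es=1 ;;   # method missing -> es=1, as in the trace
  *) es=0 ;;
esac
echo "es=$es"
```

The -32601 code is the standard JSON-RPC 2.0 "Method not found" error, so the check keys on the code rather than the message string.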
00:07:41.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:41.425 14:27:48 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:41.425 14:27:48 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:41.425 14:27:48 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:41.426 14:27:48 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:41.426 14:27:48 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.426 14:27:48 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.426 14:27:48 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.426 14:27:48 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.426 14:27:48 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.426 14:27:48 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.426 14:27:48 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.426 14:27:48 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.426 14:27:48 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.426 14:27:48 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.426 14:27:48 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.426 14:27:48 version -- scripts/common.sh@344 -- # case "$op" in 00:07:41.426 14:27:48 version -- scripts/common.sh@345 -- # : 1 00:07:41.426 14:27:48 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.426 14:27:48 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:41.426 14:27:48 version -- scripts/common.sh@365 -- # decimal 1 00:07:41.426 14:27:48 version -- scripts/common.sh@353 -- # local d=1 00:07:41.426 14:27:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.426 14:27:48 version -- scripts/common.sh@355 -- # echo 1 00:07:41.426 14:27:48 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.426 14:27:48 version -- scripts/common.sh@366 -- # decimal 2 00:07:41.426 14:27:48 version -- scripts/common.sh@353 -- # local d=2 00:07:41.426 14:27:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.426 14:27:48 version -- scripts/common.sh@355 -- # echo 2 00:07:41.426 14:27:48 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.426 14:27:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.426 14:27:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.426 14:27:48 version -- scripts/common.sh@368 -- # return 0 00:07:41.426 14:27:48 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.426 14:27:48 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:41.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.426 --rc genhtml_branch_coverage=1 00:07:41.426 --rc genhtml_function_coverage=1 00:07:41.426 --rc genhtml_legend=1 00:07:41.426 --rc geninfo_all_blocks=1 00:07:41.426 --rc geninfo_unexecuted_blocks=1 00:07:41.426 00:07:41.426 ' 00:07:41.426 14:27:48 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:41.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.426 --rc genhtml_branch_coverage=1 00:07:41.426 --rc genhtml_function_coverage=1 00:07:41.426 --rc genhtml_legend=1 00:07:41.426 --rc geninfo_all_blocks=1 00:07:41.426 --rc geninfo_unexecuted_blocks=1 00:07:41.426 00:07:41.426 ' 00:07:41.426 14:27:48 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:41.426 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.426 --rc genhtml_branch_coverage=1 00:07:41.426 --rc genhtml_function_coverage=1 00:07:41.426 --rc genhtml_legend=1 00:07:41.426 --rc geninfo_all_blocks=1 00:07:41.426 --rc geninfo_unexecuted_blocks=1 00:07:41.426 00:07:41.426 ' 00:07:41.426 14:27:48 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:41.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.426 --rc genhtml_branch_coverage=1 00:07:41.426 --rc genhtml_function_coverage=1 00:07:41.426 --rc genhtml_legend=1 00:07:41.426 --rc geninfo_all_blocks=1 00:07:41.426 --rc geninfo_unexecuted_blocks=1 00:07:41.426 00:07:41.426 ' 00:07:41.426 14:27:48 version -- app/version.sh@17 -- # get_header_version major 00:07:41.426 14:27:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:41.426 14:27:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:41.426 14:27:48 version -- app/version.sh@14 -- # cut -f2 00:07:41.426 14:27:48 version -- app/version.sh@17 -- # major=25 00:07:41.426 14:27:48 version -- app/version.sh@18 -- # get_header_version minor 00:07:41.426 14:27:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:41.426 14:27:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:41.426 14:27:48 version -- app/version.sh@14 -- # cut -f2 00:07:41.426 14:27:48 version -- app/version.sh@18 -- # minor=1 00:07:41.426 14:27:48 version -- app/version.sh@19 -- # get_header_version patch 00:07:41.426 14:27:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:41.426 14:27:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:41.426 14:27:48 version -- app/version.sh@14 -- # cut -f2 00:07:41.426 
14:27:48 version -- app/version.sh@19 -- # patch=0 00:07:41.426 14:27:48 version -- app/version.sh@20 -- # get_header_version suffix 00:07:41.426 14:27:48 version -- app/version.sh@14 -- # cut -f2 00:07:41.426 14:27:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:41.426 14:27:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:41.426 14:27:48 version -- app/version.sh@20 -- # suffix=-pre 00:07:41.426 14:27:48 version -- app/version.sh@22 -- # version=25.1 00:07:41.426 14:27:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:41.426 14:27:48 version -- app/version.sh@28 -- # version=25.1rc0 00:07:41.426 14:27:48 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:41.426 14:27:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:41.426 14:27:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:41.426 14:27:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:41.426 00:07:41.426 real 0m0.182s 00:07:41.426 user 0m0.107s 00:07:41.426 sys 0m0.098s 00:07:41.426 14:27:48 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.426 14:27:48 version -- common/autotest_common.sh@10 -- # set +x 00:07:41.426 ************************************ 00:07:41.426 END TEST version 00:07:41.426 ************************************ 00:07:41.426 14:27:48 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:41.426 14:27:48 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:41.426 14:27:48 -- spdk/autotest.sh@194 -- # uname -s 00:07:41.426 14:27:48 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:41.426 14:27:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:41.426 14:27:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:41.426 14:27:48 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:41.426 14:27:48 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:41.426 14:27:48 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:41.426 14:27:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:41.426 14:27:48 -- common/autotest_common.sh@10 -- # set +x 00:07:41.426 14:27:48 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:41.426 14:27:48 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:41.426 14:27:48 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:41.426 14:27:48 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:41.426 14:27:48 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:41.426 14:27:48 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:41.426 14:27:48 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:41.426 14:27:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:41.426 14:27:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.426 14:27:48 -- common/autotest_common.sh@10 -- # set +x 00:07:41.426 ************************************ 00:07:41.426 START TEST nvmf_tcp 00:07:41.426 ************************************ 00:07:41.426 14:27:48 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:41.685 * Looking for test storage... 
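The version test that just completed assembles the SPDK version string from the `SPDK_VERSION_*` macros in `include/spdk/version.h` and compares it against `spdk.__version__`. A hedged sketch of that assembly logic — the values below are hard-coded stand-ins for the trace's grep/cut/tr pipeline over the header, and the `-pre → rc0` mapping is inferred from the logged `version=25.1rc0`:

```shell
# Stand-ins for get_header_version output (the real script greps version.h).
major=25 minor=1 patch=0 suffix=-pre

version="$major.$minor"
# The trace only appends the patch component when it is nonzero.
if (( patch != 0 )); then version="$version.$patch"; fi
# A -pre suffix becomes an rc0 tag, matching py_version=25.1rc0 in the log.
if [ "$suffix" = "-pre" ]; then version="${version}rc0"; fi
echo "$version"
```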
00:07:41.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:41.685 14:27:48 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:41.685 14:27:48 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:41.685 14:27:48 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:41.685 14:27:48 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.685 14:27:48 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:41.685 14:27:48 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.685 14:27:48 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:41.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.685 --rc genhtml_branch_coverage=1 00:07:41.685 --rc genhtml_function_coverage=1 00:07:41.685 --rc genhtml_legend=1 00:07:41.685 --rc geninfo_all_blocks=1 00:07:41.685 --rc geninfo_unexecuted_blocks=1 00:07:41.685 00:07:41.685 ' 00:07:41.685 14:27:48 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:41.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.686 --rc genhtml_branch_coverage=1 00:07:41.686 --rc genhtml_function_coverage=1 00:07:41.686 --rc genhtml_legend=1 00:07:41.686 --rc geninfo_all_blocks=1 00:07:41.686 --rc geninfo_unexecuted_blocks=1 00:07:41.686 00:07:41.686 ' 00:07:41.686 14:27:48 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:41.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.686 --rc genhtml_branch_coverage=1 00:07:41.686 --rc genhtml_function_coverage=1 00:07:41.686 --rc genhtml_legend=1 00:07:41.686 --rc geninfo_all_blocks=1 00:07:41.686 --rc geninfo_unexecuted_blocks=1 00:07:41.686 00:07:41.686 ' 00:07:41.686 14:27:48 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:41.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.686 --rc genhtml_branch_coverage=1 00:07:41.686 --rc genhtml_function_coverage=1 00:07:41.686 --rc genhtml_legend=1 00:07:41.686 --rc geninfo_all_blocks=1 00:07:41.686 --rc geninfo_unexecuted_blocks=1 00:07:41.686 00:07:41.686 ' 00:07:41.686 14:27:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:41.686 14:27:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:41.686 14:27:48 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:41.686 14:27:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:41.686 14:27:48 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.686 14:27:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:41.686 ************************************ 00:07:41.686 START TEST nvmf_target_core 00:07:41.686 ************************************ 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:41.686 * Looking for test storage... 
00:07:41.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:41.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.686 --rc genhtml_branch_coverage=1 00:07:41.686 --rc genhtml_function_coverage=1 00:07:41.686 --rc genhtml_legend=1 00:07:41.686 --rc geninfo_all_blocks=1 00:07:41.686 --rc geninfo_unexecuted_blocks=1 00:07:41.686 00:07:41.686 ' 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:41.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.686 --rc genhtml_branch_coverage=1 
00:07:41.686 --rc genhtml_function_coverage=1 00:07:41.686 --rc genhtml_legend=1 00:07:41.686 --rc geninfo_all_blocks=1 00:07:41.686 --rc geninfo_unexecuted_blocks=1 00:07:41.686 00:07:41.686 ' 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:41.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.686 --rc genhtml_branch_coverage=1 00:07:41.686 --rc genhtml_function_coverage=1 00:07:41.686 --rc genhtml_legend=1 00:07:41.686 --rc geninfo_all_blocks=1 00:07:41.686 --rc geninfo_unexecuted_blocks=1 00:07:41.686 00:07:41.686 ' 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:41.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.686 --rc genhtml_branch_coverage=1 00:07:41.686 --rc genhtml_function_coverage=1 00:07:41.686 --rc genhtml_legend=1 00:07:41.686 --rc geninfo_all_blocks=1 00:07:41.686 --rc geninfo_unexecuted_blocks=1 00:07:41.686 00:07:41.686 ' 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.686 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.945 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:41.945 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:41.945 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.945 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.945 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.945 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.945 14:27:48 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.945 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:41.945 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.945 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.945 14:27:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.945 14:27:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.945 14:27:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.945 14:27:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.945 14:27:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:41.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
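The record just above captures a real expansion bug: `nvmf/common.sh` line 33 runs `'[' '' -eq 1 ']'` and bash reports "integer expression expected", because the flag variable being tested (its name is not shown in the trace) expands to an empty string rather than an integer. A small sketch of the failure and the usual `${var:-0}` guard — the variable name here is hypothetical:

```shell
flag=""   # stands in for the unset flag that produced the trace's error

# [ "" -eq 1 ] is a test(1) error (exit status 2), not a clean "false";
# the stderr message is what appears in the log.
if [ "$flag" -eq 1 ] 2>/dev/null; then r1=yes; else r1=no; fi

# Defaulting the expansion sidesteps the non-integer operand entirely.
if [ "${flag:-0}" -eq 1 ]; then r2=yes; else r2=no; fi

echo "$r1 $r2"
```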
00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:41.946 ************************************ 00:07:41.946 START TEST nvmf_abort 00:07:41.946 ************************************ 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:41.946 * Looking for test storage... 
00:07:41.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.946 
14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:41.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.946 --rc genhtml_branch_coverage=1 00:07:41.946 --rc genhtml_function_coverage=1 00:07:41.946 --rc genhtml_legend=1 00:07:41.946 --rc geninfo_all_blocks=1 00:07:41.946 --rc 
geninfo_unexecuted_blocks=1 00:07:41.946 00:07:41.946 ' 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:41.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.946 --rc genhtml_branch_coverage=1 00:07:41.946 --rc genhtml_function_coverage=1 00:07:41.946 --rc genhtml_legend=1 00:07:41.946 --rc geninfo_all_blocks=1 00:07:41.946 --rc geninfo_unexecuted_blocks=1 00:07:41.946 00:07:41.946 ' 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:41.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.946 --rc genhtml_branch_coverage=1 00:07:41.946 --rc genhtml_function_coverage=1 00:07:41.946 --rc genhtml_legend=1 00:07:41.946 --rc geninfo_all_blocks=1 00:07:41.946 --rc geninfo_unexecuted_blocks=1 00:07:41.946 00:07:41.946 ' 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:41.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.946 --rc genhtml_branch_coverage=1 00:07:41.946 --rc genhtml_function_coverage=1 00:07:41.946 --rc genhtml_legend=1 00:07:41.946 --rc geninfo_all_blocks=1 00:07:41.946 --rc geninfo_unexecuted_blocks=1 00:07:41.946 00:07:41.946 ' 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
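The `lt 1.15 2` gate traced repeatedly above (scripts/common.sh `cmp_versions`) splits both version strings on `.`, `-`, and `:` and compares the fields numerically, left to right, padding the shorter version with zeros. A simplified re-sketch — it assumes purely numeric fields, unlike the real helper, which also normalizes non-numeric components via its `decimal` function:

```shell
# lt A B: succeed (return 0) iff version A sorts strictly before version B.
lt() {
  local IFS=.-: v
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < n; v++ )); do
    # Missing fields compare as 0, so "2" behaves like "2.0".
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    (( a > b )) && return 1
    (( a < b )) && return 0
  done
  return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2" || echo "1.15 >= 2"
```

This is why the lcov gate fires: field 0 compares 1 against 2, so 1.15 sorts before 2 regardless of the larger second field.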
00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.946 14:27:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.946 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:41.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:41.947 14:27:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:47.271 14:27:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:47.271 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:47.271 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:47.271 14:27:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:47.271 Found net devices under 0000:31:00.0: cvl_0_0 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:31:00.1: cvl_0_1' 00:07:47.271 Found net devices under 0000:31:00.1: cvl_0_1 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:47.271 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:47.272 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.272 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.272 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.272 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:47.272 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.272 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.272 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:47.272 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:47.272 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.272 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:47.272 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:47.272 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:47.272 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.272 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:47.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:47.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:07:47.531 00:07:47.531 --- 10.0.0.2 ping statistics --- 00:07:47.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.531 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:47.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:07:47.531 00:07:47.531 --- 10.0.0.1 ping statistics --- 00:07:47.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.531 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3676881 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3676881 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3676881 ']' 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:47.531 14:27:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:47.531 [2024-11-20 14:27:54.562460] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:07:47.531 [2024-11-20 14:27:54.562527] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.790 [2024-11-20 14:27:54.653567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.790 [2024-11-20 14:27:54.707656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.790 [2024-11-20 14:27:54.707709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.790 [2024-11-20 14:27:54.707718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.790 [2024-11-20 14:27:54.707726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.790 [2024-11-20 14:27:54.707732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:47.790 [2024-11-20 14:27:54.709639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.790 [2024-11-20 14:27:54.709795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.790 [2024-11-20 14:27:54.709795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.362 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.362 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:48.362 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:48.362 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:48.362 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.362 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.362 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:48.362 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.362 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.362 [2024-11-20 14:27:55.401198] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.362 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.362 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:48.362 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.362 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.622 Malloc0 00:07:48.622 14:27:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.622 Delay0 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.622 [2024-11-20 14:27:55.467780] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.622 14:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:48.622 [2024-11-20 14:27:55.531573] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:51.160 Initializing NVMe Controllers 00:07:51.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:51.161 controller IO queue size 128 less than required 00:07:51.161 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:51.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:51.161 Initialization complete. Launching workers. 
00:07:51.161 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28581 00:07:51.161 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28642, failed to submit 62 00:07:51.161 success 28585, unsuccessful 57, failed 0 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:51.161 rmmod nvme_tcp 00:07:51.161 rmmod nvme_fabrics 00:07:51.161 rmmod nvme_keyring 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:51.161 14:27:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3676881 ']' 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3676881 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3676881 ']' 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3676881 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3676881 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3676881' 00:07:51.161 killing process with pid 3676881 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3676881 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3676881 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.161 14:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.069 14:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:53.069 00:07:53.069 real 0m11.104s 00:07:53.069 user 0m12.635s 00:07:53.069 sys 0m5.051s 00:07:53.069 14:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.069 14:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.069 ************************************ 00:07:53.069 END TEST nvmf_abort 00:07:53.069 ************************************ 00:07:53.069 14:27:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:53.069 14:27:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:53.069 14:27:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.069 14:27:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:53.069 ************************************ 00:07:53.069 START TEST nvmf_ns_hotplug_stress 00:07:53.069 ************************************ 00:07:53.069 14:27:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:53.069 * Looking for test storage... 00:07:53.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.069 14:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:53.069 14:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:53.069 14:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:53.069 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:53.069 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.069 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.069 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.069 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.069 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.069 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.069 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.069 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.069 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.069 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.069 
14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.069 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:53.069 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:53.069 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.069 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:53.069 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:53.069 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.070 14:28:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:53.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.070 --rc genhtml_branch_coverage=1 00:07:53.070 --rc genhtml_function_coverage=1 00:07:53.070 --rc genhtml_legend=1 00:07:53.070 --rc geninfo_all_blocks=1 00:07:53.070 --rc geninfo_unexecuted_blocks=1 00:07:53.070 00:07:53.070 ' 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:53.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.070 --rc genhtml_branch_coverage=1 00:07:53.070 --rc genhtml_function_coverage=1 00:07:53.070 --rc genhtml_legend=1 00:07:53.070 --rc geninfo_all_blocks=1 00:07:53.070 --rc geninfo_unexecuted_blocks=1 00:07:53.070 00:07:53.070 ' 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:53.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.070 --rc genhtml_branch_coverage=1 00:07:53.070 --rc genhtml_function_coverage=1 00:07:53.070 --rc genhtml_legend=1 00:07:53.070 --rc geninfo_all_blocks=1 00:07:53.070 --rc geninfo_unexecuted_blocks=1 00:07:53.070 00:07:53.070 ' 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:53.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.070 --rc genhtml_branch_coverage=1 00:07:53.070 --rc genhtml_function_coverage=1 00:07:53.070 --rc genhtml_legend=1 00:07:53.070 --rc geninfo_all_blocks=1 00:07:53.070 --rc geninfo_unexecuted_blocks=1 00:07:53.070 
00:07:53.070 ' 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:53.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:53.070 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.071 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:53.071 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:53.071 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:53.071 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.071 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.071 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.071 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:53.071 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:53.071 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:53.071 14:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:59.793 14:28:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:59.793 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:59.793 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:59.793 14:28:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.793 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:59.794 Found net devices under 0000:31:00.0: cvl_0_0 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:59.794 14:28:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:59.794 Found net devices under 0000:31:00.1: cvl_0_1 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.794 14:28:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:59.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:07:59.794 00:07:59.794 --- 10.0.0.2 ping statistics --- 00:07:59.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.794 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:59.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:07:59.794 00:07:59.794 --- 10.0.0.1 ping statistics --- 00:07:59.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.794 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3682001 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3682001 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3682001 ']' 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:59.794 14:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:59.794 [2024-11-20 14:28:05.915148] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization...
00:07:59.794 [2024-11-20 14:28:05.915215] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:59.794 [2024-11-20 14:28:05.992396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:59.794 [2024-11-20 14:28:06.029042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:59.794 [2024-11-20 14:28:06.029080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:59.794 [2024-11-20 14:28:06.029086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:59.794 [2024-11-20 14:28:06.029091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:59.794 [2024-11-20 14:28:06.029095] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
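The EAL parameters above include `-c 0xE`, the core mask passed through from `nvmfappstart -m 0xE`. A short sketch of how that bitmask selects cores, consistent with the "Total cores available: 3" notice and the reactor-start messages for cores 1, 2, and 3 in this trace:

```shell
# Decode an SPDK/DPDK-style core mask: bit i set means core i is used.
# 0xE = binary 1110, so cores 1, 2 and 3 run reactors; core 0 is left free.
mask=$((0xE))
cores=""
i=0
while [ "$i" -lt 8 ]; do
  if [ $(( (mask >> i) & 1 )) -eq 1 ]; then
    cores="$cores $i"
  fi
  i=$((i + 1))
done
echo "cores:$cores"
```

Leaving bit 0 clear is a common choice in these tests so that core 0 stays available for the perf initiator and housekeeping processes.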
00:07:59.794 [2024-11-20 14:28:06.030613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:59.794 [2024-11-20 14:28:06.030635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:59.794 [2024-11-20 14:28:06.030636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:59.794 14:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:59.794 14:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:07:59.794 14:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:59.794 14:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:59.794 14:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:59.794 14:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:59.794 14:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:07:59.794 14:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:08:00.053 [2024-11-20 14:28:06.875753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:00.053 14:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:00.053 14:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.313 [2024-11-20 14:28:07.196854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.313 14:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.313 14:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:00.576 Malloc0 00:08:00.576 14:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:00.838 Delay0 00:08:00.838 14:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.838 14:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:01.098 NULL1 00:08:01.098 14:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:01.358 14:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3682498 00:08:01.358 14:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:01.358 14:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.358 14:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:01.358 14:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.618 14:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:01.618 14:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:01.618 [2024-11-20 14:28:08.635842] bdev.c:5424:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:08:01.618 true 00:08:01.618 14:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:01.618 14:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.877 14:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.137 14:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:02.137 14:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:02.137 true 00:08:02.137 14:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:02.137 14:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.395 14:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.395 14:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:02.395 14:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:02.653 true 00:08:02.653 14:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:02.653 14:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.912 14:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.912 14:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:02.912 14:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:03.171 true 00:08:03.171 14:28:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:03.171 14:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.430 14:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.430 14:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:03.430 14:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:03.689 true 00:08:03.689 14:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:03.689 14:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.626 Read completed with error (sct=0, sc=11) 00:08:04.626 14:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.626 14:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:04.626 14:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:04.885 true 00:08:04.885 14:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:04.885 14:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.885 14:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.145 14:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:05.145 14:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:05.405 true 00:08:05.405 14:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:05.405 14:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.405 14:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.664 14:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:05.664 14:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:05.664 true 00:08:05.664 14:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:05.664 14:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.924 14:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.183 14:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:06.183 14:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:06.183 true 00:08:06.183 14:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:06.183 14:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.442 14:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.701 14:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:06.701 14:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:06.701 true 00:08:06.701 14:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:06.701 14:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:06.960 14:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.960 14:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:06.960 14:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:07.219 true 00:08:07.219 14:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:07.219 14:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.478 14:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.478 14:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:07.478 14:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:07.737 true 00:08:07.737 14:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:07.737 14:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.737 14:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.997 14:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:07.997 14:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:08.255 true 00:08:08.255 14:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:08.255 14:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.255 14:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.514 14:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:08.514 14:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:08.514 true 00:08:08.773 14:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:08.773 14:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.711 14:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
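The repeating pattern in this trace (`nvmf_subsystem_add_ns ... Delay0`, `null_size=N`, `bdev_null_resize NULL1 N`, `nvmf_subsystem_remove_ns ... 1`) is the hotplug stress loop itself. A control-flow sketch with the `rpc.py` calls replaced by echo stubs; subsystem and bdev names are taken from the log, the pass count of 3 is illustrative:

```shell
# Sketch of the ns_hotplug_stress loop: each pass re-adds the Delay0
# namespace, grows the NULL1 bdev by one block, and removes namespace 1,
# all while the perf initiator keeps I/O in flight against the subsystem.
null_size=1000
pass=0
while [ "$pass" -lt 3 ]; do
  echo "rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0"
  null_size=$((null_size + 1))
  echo "rpc bdev_null_resize NULL1 $null_size"
  echo "rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1"
  pass=$((pass + 1))
done
echo "final null_size=$null_size"
```

This matches the counter visible in the trace: `null_size` starts at 1000 and increments once per iteration (1001, 1002, 1003, ...), so the initiator sees a namespace that is repeatedly removed, re-attached, and resized under load.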
00:08:09.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.711 14:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:09.711 14:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:09.969 true 00:08:09.969 14:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:09.969 14:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.969 14:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.228 14:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:10.228 14:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:10.487 true 00:08:10.487 14:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:10.487 14:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.487 14:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.746 14:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:10.746 14:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:10.746 true 00:08:10.746 14:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:10.746 14:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.005 14:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.263 14:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:11.263 14:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:11.263 true 00:08:11.263 14:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:11.263 14:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.522 14:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.522 14:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:11.522 14:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:11.782 true 00:08:11.782 14:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:11.782 14:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.716 14:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.716 14:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:12.716 14:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:12.976 true 00:08:12.976 14:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:12.976 14:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.234 14:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.234 14:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:13.234 14:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:13.493 true 00:08:13.493 14:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:13.493 14:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.493 14:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.752 14:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:13.752 14:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:14.010 true 00:08:14.011 14:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:14.011 14:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.011 14:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.270 14:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:14.270 14:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 
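Each iteration above is gated on `kill -0 3682498`, the PID of the `spdk_nvme_perf` process: signal 0 delivers nothing and only reports whether the target process still exists, so the loop stops as soon as the 30-second perf run exits. A small self-contained sketch of that liveness probe, using a background `sleep` as the stand-in process:

```shell
# kill -0 sends no signal; its exit status alone tells us if the PID is alive.
sleep 5 &
pid=$!
if kill -0 "$pid" 2>/dev/null; then alive=yes; else alive=no; fi
echo "pid $pid alive=$alive"
kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null || true
```

Note that `kill -0` can also fail with EPERM for a live process owned by another user, which is why the real test runs it under the same user that launched perf.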
00:08:14.270 true 00:08:14.270 14:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:14.270 14:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.528 14:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.787 14:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:14.787 14:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:14.787 true 00:08:14.787 14:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:14.787 14:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.045 14:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.304 14:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:15.304 14:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:15.304 true 00:08:15.304 14:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
3682498 00:08:15.304 14:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.562 14:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.562 14:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:15.562 14:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:15.821 true 00:08:15.821 14:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:15.821 14:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.758 14:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.758 14:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:16.758 14:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:17.016 true 00:08:17.016 14:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:17.016 14:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.276 14:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.276 14:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:17.276 14:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:17.534 true 00:08:17.534 14:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:17.534 14:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.793 14:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.793 14:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:17.793 14:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:18.051 true 00:08:18.051 14:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:18.051 14:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.051 
14:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.310 14:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:18.310 14:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:18.310 true 00:08:18.569 14:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:18.569 14:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.569 14:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.827 14:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:08:18.827 14:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:08:18.827 true 00:08:18.827 14:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:18.827 14:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.761 14:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.020 14:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:08:20.020 14:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:08:20.020 true 00:08:20.020 14:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:20.020 14:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.278 14:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.537 14:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:08:20.537 14:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:08:20.537 true 00:08:20.537 14:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:20.537 14:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.796 14:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.055 14:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:08:21.055 14:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:08:21.055 true 00:08:21.055 14:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:21.055 14:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.314 14:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.314 14:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:08:21.314 14:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:08:21.573 true 00:08:21.573 14:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:21.573 14:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.832 14:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.832 
14:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:08:21.832 14:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:08:22.091 true 00:08:22.091 14:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:22.091 14:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.091 14:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.350 14:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:08:22.350 14:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:08:22.610 true 00:08:22.610 14:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:22.610 14:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.610 14:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.869 14:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:08:22.869 14:28:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:08:22.869 true 00:08:22.869 14:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:22.869 14:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.805 14:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.063 14:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:08:24.063 14:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:08:24.063 true 00:08:24.063 14:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:24.063 14:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.322 14:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.580 14:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:08:24.580 14:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:08:24.580 true 00:08:24.580 14:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:24.580 14:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.838 14:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.838 14:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:08:24.838 14:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:08:25.098 true 00:08:25.098 14:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:25.098 14:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.355 14:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.355 14:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:08:25.355 14:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:08:25.614 true 00:08:25.614 14:28:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:25.614 14:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.614 14:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.873 14:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:08:25.873 14:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:08:26.133 true 00:08:26.133 14:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:26.133 14:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.067 14:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.067 14:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:08:27.067 14:28:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:08:27.325 true 00:08:27.325 14:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:27.325 14:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.325 14:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.584 14:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:08:27.584 14:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:08:27.584 true 00:08:27.843 14:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:27.843 14:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.843 14:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.112 14:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:08:28.112 14:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:08:28.112 true 00:08:28.112 14:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:28.112 14:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.051 14:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.309 14:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:08:29.309 14:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:08:29.309 true 00:08:29.309 14:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:29.309 14:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.569 14:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.828 14:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:08:29.828 14:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:08:29.828 true 00:08:29.828 14:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:29.828 14:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.087 14:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.087 14:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:08:30.087 14:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:08:30.346 true 00:08:30.346 14:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:30.346 14:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.282 14:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.541 14:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:08:31.541 14:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:08:31.541 Initializing NVMe Controllers 00:08:31.541 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:31.541 Controller IO queue size 128, less than required. 00:08:31.541 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:31.541 Controller IO queue size 128, less than required. 00:08:31.541 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:31.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:31.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:31.541 Initialization complete. Launching workers. 00:08:31.541 ======================================================== 00:08:31.541 Latency(us) 00:08:31.542 Device Information : IOPS MiB/s Average min max 00:08:31.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 392.34 0.19 105914.31 1512.48 1008052.28 00:08:31.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10105.52 4.93 12625.45 1187.54 400793.94 00:08:31.542 ======================================================== 00:08:31.542 Total : 10497.86 5.13 16111.94 1187.54 1008052.28 00:08:31.542 00:08:31.542 true 00:08:31.542 14:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3682498 00:08:31.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3682498) - No such process 00:08:31.542 14:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3682498 00:08:31.542 14:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.801 14:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:31.801 14:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:31.801 14:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:31.801 14:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:31.801 14:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:31.801 14:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:32.060 null0 00:08:32.060 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.060 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.060 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:32.318 null1 00:08:32.318 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.318 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.318 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 
00:08:32.318 null2 00:08:32.318 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.318 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.318 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:32.576 null3 00:08:32.576 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.576 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.576 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:32.576 null4 00:08:32.834 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.834 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.834 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:32.834 null5 00:08:32.834 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.834 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.834 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:33.093 null6 00:08:33.093 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.093 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.093 14:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:33.093 null7 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:33.093 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:33.094 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:33.094 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:33.094 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3689853 3689854 3689856 3689857 3689858 3689861 3689863 3689864 00:08:33.094 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:33.094 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:33.094 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:33.094 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.094 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:33.094 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.094 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:33.094 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:33.094 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:33.094 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:33.094 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:33.094 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.094 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:33.353 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.353 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:33.353 14:28:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:33.353 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.353 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.353 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:33.353 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:33.353 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:33.611 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.611 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.611 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:33.611 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.611 14:28:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.611 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:33.611 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.611 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.611 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:33.611 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.611 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.611 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:33.612 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.612 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.612 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:33.612 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.612 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:08:33.612 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:33.612 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.612 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.612 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:33.612 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.612 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.612 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:33.612 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.612 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:33.612 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:33.612 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:33.612 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.871 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.130 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.130 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.130 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.130 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.130 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.130 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.130 14:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.130 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:08:34.130 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:34.130 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.130 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.130 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.130 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.130 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.130 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.130 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.130 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.130 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.130 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.130 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.130 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.130 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.130 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.389 14:28:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:34.389 14:28:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.389 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 
00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:34.649 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.908 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.167 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.167 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.167 14:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.167 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.167 14:28:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.495 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.816 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.816 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.816 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.816 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.816 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.817 14:28:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.817 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.078 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.078 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.078 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.078 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.078 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.078 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.078 14:28:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.078 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.078 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.078 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.078 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.078 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.078 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.078 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.078 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.078 14:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.078 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.078 
14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.078 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.078 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.078 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.078 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.078 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.078 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.078 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.078 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.078 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.078 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.078 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.078 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.078 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.078 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.338 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.597 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.597 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.597 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.597 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.597 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.597 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.597 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.597 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.597 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.597 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.597 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.597 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.597 14:28:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.597 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.597 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.856 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.856 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.856 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:36.856 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:36.856 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:36.856 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:36.856 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:36.856 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:36.856 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:36.856 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:36.856 rmmod nvme_tcp 00:08:36.856 rmmod nvme_fabrics 00:08:36.856 rmmod nvme_keyring 00:08:36.856 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:36.856 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:36.856 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:36.856 14:28:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3682001 ']' 00:08:36.856 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3682001 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3682001 ']' 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3682001 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3682001 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3682001' 00:08:36.857 killing process with pid 3682001 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3682001 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3682001 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 
-- # iptr 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.857 14:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.392 14:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:39.392 00:08:39.392 real 0m46.023s 00:08:39.392 user 3m7.847s 00:08:39.392 sys 0m13.718s 00:08:39.392 14:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.392 14:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.392 ************************************ 00:08:39.392 END TEST nvmf_ns_hotplug_stress 00:08:39.392 ************************************ 00:08:39.393 14:28:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:39.393 14:28:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:39.393 14:28:45 nvmf_tcp.nvmf_target_core 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.393 14:28:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.393 ************************************ 00:08:39.393 START TEST nvmf_delete_subsystem 00:08:39.393 ************************************ 00:08:39.393 14:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:39.393 * Looking for test storage... 00:08:39.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.393 14:28:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@366 -- # ver2[v]=2 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:39.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.393 --rc genhtml_branch_coverage=1 00:08:39.393 --rc genhtml_function_coverage=1 00:08:39.393 --rc genhtml_legend=1 00:08:39.393 --rc geninfo_all_blocks=1 00:08:39.393 --rc geninfo_unexecuted_blocks=1 00:08:39.393 00:08:39.393 ' 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:39.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.393 --rc genhtml_branch_coverage=1 00:08:39.393 --rc genhtml_function_coverage=1 00:08:39.393 --rc genhtml_legend=1 00:08:39.393 --rc geninfo_all_blocks=1 00:08:39.393 --rc geninfo_unexecuted_blocks=1 00:08:39.393 00:08:39.393 ' 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:39.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.393 --rc genhtml_branch_coverage=1 00:08:39.393 --rc genhtml_function_coverage=1 00:08:39.393 --rc genhtml_legend=1 00:08:39.393 --rc geninfo_all_blocks=1 00:08:39.393 --rc geninfo_unexecuted_blocks=1 00:08:39.393 00:08:39.393 ' 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:39.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.393 --rc genhtml_branch_coverage=1 00:08:39.393 --rc genhtml_function_coverage=1 00:08:39.393 --rc genhtml_legend=1 00:08:39.393 --rc geninfo_all_blocks=1 00:08:39.393 --rc geninfo_unexecuted_blocks=1 00:08:39.393 00:08:39.393 ' 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:39.393 14:28:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.393 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.394 14:28:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.394 14:28:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:39.394 14:28:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.669 14:28:51 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.669 14:28:51 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:44.669 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.669 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:44.929 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:44.929 Found net devices under 0000:31:00.0: cvl_0_0 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:44.929 Found net devices under 0000:31:00.1: cvl_0_1 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.929 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.188 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.188 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m 
comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:45.188 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:45.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:08:45.188 00:08:45.188 --- 10.0.0.2 ping statistics --- 00:08:45.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.188 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:08:45.188 14:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:45.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:08:45.188 00:08:45.188 --- 10.0.0.1 ping statistics --- 00:08:45.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.188 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:08:45.188 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.188 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:45.188 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:45.188 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.188 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:45.188 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:45.188 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.188 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:45.188 14:28:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:45.188 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:45.188 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:45.188 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.188 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:45.188 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3695315 00:08:45.188 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3695315 00:08:45.188 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3695315 ']' 00:08:45.188 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.188 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.189 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:45.189 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.189 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:45.189 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:45.189 [2024-11-20 14:28:52.075512] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:08:45.189 [2024-11-20 14:28:52.075579] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.189 [2024-11-20 14:28:52.164895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:45.189 [2024-11-20 14:28:52.216646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.189 [2024-11-20 14:28:52.216699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.189 [2024-11-20 14:28:52.216714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.189 [2024-11-20 14:28:52.216721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.189 [2024-11-20 14:28:52.216727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:45.189 [2024-11-20 14:28:52.218575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:45.189 [2024-11-20 14:28:52.218582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:46.126 [2024-11-20 14:28:52.895995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:46.126 [2024-11-20 14:28:52.916187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:46.126 NULL1
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:46.126 Delay0
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3695407
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:08:46.126 14:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:08:46.126 [2024-11-20 14:28:52.997004] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
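The xtrace lines above are the setup half of target/delete_subsystem.sh. The sequence can be sketched as a dry-run script; the `rpc` echo wrapper is an illustrative stand-in (a real run would invoke SPDK's `scripts/rpc.py` against the running target), while the subcommands and flags are taken verbatim from the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the delete_subsystem test setup traced in the log above.
# rpc() only echoes each command; swap in SPDK's scripts/rpc.py to run for real.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512    # null bdev: 1000 MiB, 512 B blocks
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev (latencies nominally in microseconds, so roughly 1 s per I/O here) is what keeps commands in flight long enough for the later nvmf_delete_subsystem to race against active I/O.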
00:08:48.032 14:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:48.032 14:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.032 14:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.291 Write completed with error (sct=0, sc=8) 00:08:48.291 Read completed with error (sct=0, sc=8) 00:08:48.291 Write completed with error (sct=0, sc=8) 00:08:48.291 starting I/O failed: -6 00:08:48.291 Read completed with error (sct=0, sc=8) 00:08:48.291 Read completed with error (sct=0, sc=8) 00:08:48.291 Write completed with error (sct=0, sc=8) 00:08:48.291 Read completed with error (sct=0, sc=8) 00:08:48.291 starting I/O failed: -6 00:08:48.291 Read completed with error (sct=0, sc=8) 00:08:48.291 Read completed with error (sct=0, sc=8) 00:08:48.291 Write completed with error (sct=0, sc=8) 00:08:48.291 Write completed with error (sct=0, sc=8) 00:08:48.291 starting I/O failed: -6 00:08:48.291 Read completed with error (sct=0, sc=8) 00:08:48.291 Write completed with error (sct=0, sc=8) 00:08:48.291 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error 
(sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 [2024-11-20 14:28:55.205799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa200e0 is same with the state(6) to be set 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed 
with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 
00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error 
(sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 starting I/O failed: -6 00:08:48.292 [2024-11-20 14:28:55.206338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa8f8000c40 is same with the state(6) to be set 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 
00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Write completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.292 Read completed with error (sct=0, sc=8) 00:08:48.293 Read completed with error (sct=0, sc=8) 00:08:48.293 Read completed with error (sct=0, sc=8) 00:08:48.293 Read completed with error (sct=0, sc=8) 00:08:48.293 Read completed with error (sct=0, sc=8) 00:08:48.293 Write completed with error (sct=0, sc=8) 00:08:48.293 Read completed with error (sct=0, sc=8) 00:08:48.293 Read completed with error (sct=0, sc=8) 00:08:48.293 Write completed with error (sct=0, sc=8) 00:08:48.293 Read completed with error (sct=0, sc=8) 00:08:48.293 Read 
completed with error (sct=0, sc=8) 00:08:48.293 Read completed with error (sct=0, sc=8) 00:08:48.293 Read completed with error (sct=0, sc=8) 00:08:48.293 Write completed with error (sct=0, sc=8) 00:08:49.231 [2024-11-20 14:28:56.177629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa215e0 is same with the state(6) to be set 00:08:49.231 Write completed with error (sct=0, sc=8) 00:08:49.231 Write completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Write completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Write completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Write completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Write completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error 
(sct=0, sc=8) 00:08:49.231 Write completed with error (sct=0, sc=8) 00:08:49.231 [2024-11-20 14:28:56.207912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa8f800d7c0 is same with the state(6) to be set 00:08:49.231 Write completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Write completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Write completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Write completed with error (sct=0, sc=8) 00:08:49.231 Write completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Write completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Write completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.231 Read completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 [2024-11-20 14:28:56.208130] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa8f800d020 is same with the state(6) to be set 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 [2024-11-20 14:28:56.208547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1ff00 is same with the state(6) to be set 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error 
(sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Write completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 Read completed with error (sct=0, sc=8) 00:08:49.232 [2024-11-20 14:28:56.208825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa202c0 is same with the state(6) to be set 00:08:49.232 Initializing NVMe Controllers 00:08:49.232 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:49.232 Controller IO queue size 128, less than required. 00:08:49.232 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:49.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:49.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:49.232 Initialization complete. Launching workers.
00:08:49.232 ========================================================
00:08:49.232 Latency(us)
00:08:49.232 Device Information : IOPS MiB/s Average min max
00:08:49.232 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.99 0.08 909070.49 475.25 2002132.97
00:08:49.232 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.03 0.08 974329.25 228.15 2001547.89
00:08:49.232 ========================================================
00:08:49.232 Total : 336.02 0.16 940927.58 228.15 2002132.97
00:08:49.232
00:08:49.232 [2024-11-20 14:28:56.209139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa215e0 (9): Bad file descriptor
00:08:49.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:49.232 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:49.232 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:49.232 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3695407
00:08:49.232 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3695407
00:08:49.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3695407) - No such process
00:08:49.800 14:28:56
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3695407 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3695407 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3695407 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.800 [2024-11-20 14:28:56.729698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3696408 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3696408 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:49.800 14:28:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 
-t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:49.800 [2024-11-20 14:28:56.788038] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:50.368 14:28:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:50.368 14:28:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3696408 00:08:50.368 14:28:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:50.935 14:28:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:50.935 14:28:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3696408 00:08:50.935 14:28:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:51.503 14:28:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:51.503 14:28:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3696408 00:08:51.503 14:28:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:51.761 14:28:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:51.761 14:28:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3696408 00:08:51.761 14:28:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:52.329 14:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:52.329 14:28:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3696408
00:08:52.329 14:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:52.897 14:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:52.897 14:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3696408
00:08:52.897 14:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:53.156 Initializing NVMe Controllers
00:08:53.156 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:53.156 Controller IO queue size 128, less than required.
00:08:53.156 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:53.156 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:53.156 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:53.156 Initialization complete. Launching workers.
00:08:53.156 ========================================================
00:08:53.156 Latency(us)
00:08:53.156 Device Information : IOPS MiB/s Average min max
00:08:53.156 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002820.35 1000301.79 1042487.48
00:08:53.156 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004370.03 1000131.81 1042824.07
00:08:53.156 ========================================================
00:08:53.156 Total : 256.00 0.12 1003595.19 1000131.81 1042824.07
00:08:53.156
00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3696408
00:08:53.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3696408) - No such process
00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3696408
00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 00:08:53.416 rmmod nvme_tcp 00:08:53.416 rmmod nvme_fabrics 00:08:53.416 rmmod nvme_keyring 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3695315 ']' 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3695315 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3695315 ']' 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3695315 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3695315 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3695315' 00:08:53.416 killing process with pid 3695315 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3695315 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
3695315 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:53.416 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:53.675 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:53.675 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:53.675 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:53.675 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:53.675 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.675 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.675 14:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.584 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:55.584 00:08:55.584 real 0m16.531s 00:08:55.584 user 0m30.289s 00:08:55.584 sys 0m5.489s 00:08:55.584 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.584 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.584 ************************************ 00:08:55.584 END TEST 
nvmf_delete_subsystem 00:08:55.584 ************************************ 00:08:55.584 14:29:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:55.584 14:29:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:55.584 14:29:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.584 14:29:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:55.584 ************************************ 00:08:55.584 START TEST nvmf_host_management 00:08:55.584 ************************************ 00:08:55.584 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:55.584 * Looking for test storage... 00:08:55.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.584 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:55.584 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:55.584 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.844 14:29:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:55.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.844 --rc genhtml_branch_coverage=1 00:08:55.844 --rc genhtml_function_coverage=1 00:08:55.844 --rc genhtml_legend=1 00:08:55.844 --rc 
geninfo_all_blocks=1 00:08:55.844 --rc geninfo_unexecuted_blocks=1 00:08:55.844 00:08:55.844 ' 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:55.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.844 --rc genhtml_branch_coverage=1 00:08:55.844 --rc genhtml_function_coverage=1 00:08:55.844 --rc genhtml_legend=1 00:08:55.844 --rc geninfo_all_blocks=1 00:08:55.844 --rc geninfo_unexecuted_blocks=1 00:08:55.844 00:08:55.844 ' 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:55.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.844 --rc genhtml_branch_coverage=1 00:08:55.844 --rc genhtml_function_coverage=1 00:08:55.844 --rc genhtml_legend=1 00:08:55.844 --rc geninfo_all_blocks=1 00:08:55.844 --rc geninfo_unexecuted_blocks=1 00:08:55.844 00:08:55.844 ' 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:55.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.844 --rc genhtml_branch_coverage=1 00:08:55.844 --rc genhtml_function_coverage=1 00:08:55.844 --rc genhtml_legend=1 00:08:55.844 --rc geninfo_all_blocks=1 00:08:55.844 --rc geninfo_unexecuted_blocks=1 00:08:55.844 00:08:55.844 ' 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.844 
14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
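The repeated `/opt/golangci`, `/opt/protoc`, and `/opt/go` prefixes in the PATH values above accumulate because the export script prepends on every `source`. A guard that keeps the prepend idempotent might look like the following; `pathmunge` is a conventional helper name borrowed from common profile scripts, not part of export.sh:

```shell
# Prepend a directory to PATH only if it is not already present.
# pathmunge is a conventional name, not part of the SPDK export.sh.
pathmunge() {
    case ":$PATH:" in
        *":$1:"*) ;;             # already in PATH: do nothing
        *) PATH="$1:$PATH" ;;    # otherwise prepend
    esac
}
```

Sourcing a script that uses this guard twice leaves PATH unchanged the second time, instead of stacking another copy of each prefix.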
00:08:55.844 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:55.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:55.845 14:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
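The `gather_supported_nvmf_pci_devs` helper traced here fills the arrays declared above by matching PCI vendor/device IDs and then reading each matching function's interface name from sysfs (`/sys/bus/pci/devices/$pci/net/`). A simplified standalone sketch of that lookup, with the device-ID table abbreviated to the E810 entries seen in this log (SPDK's common.sh carries the full list), taking the sysfs root as a parameter:

```shell
# Print kernel interface names for NICs whose PCI vendor:device ID is in
# a known-good list. The ID set is abbreviated; the root argument exists
# so the scan can be pointed at a test directory instead of /sys.
list_nvmf_net_devs() {
    local root=${1:-/sys/bus/pci/devices}
    local ids=("0x8086:0x159b" "0x8086:0x1592")   # Intel E810 variants
    local pci vendor device id netdir
    for pci in "$root"/*; do
        vendor=$(cat "$pci/vendor" 2>/dev/null)
        device=$(cat "$pci/device" 2>/dev/null)
        for id in "${ids[@]}"; do
            if [ "$vendor:$device" = "$id" ]; then
                # each matching PCI function exposes its netdev under net/
                for netdir in "$pci"/net/*; do
                    [ -e "$netdir" ] && basename "$netdir"
                done
            fi
        done
    done
}
```

Run against the real sysfs on the machine in this log, such a scan would report `cvl_0_0` and `cvl_0_1` under the two `0000:31:00.x` functions, matching the `Found net devices under ...` lines that follow.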
00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:01.118 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:01.118 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.118 14:29:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.118 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:01.119 Found net devices under 0000:31:00.0: cvl_0_0 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:01.119 Found net devices under 0000:31:00.1: cvl_0_1 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.119 14:29:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:01.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:09:01.119 00:09:01.119 --- 10.0.0.2 ping statistics --- 00:09:01.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.119 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:01.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:09:01.119 00:09:01.119 --- 10.0.0.1 ping statistics --- 00:09:01.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.119 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:01.119 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:01.119 14:29:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:01.378 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3701442 00:09:01.378 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3701442 00:09:01.378 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:01.378 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3701442 ']' 00:09:01.378 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.378 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.378 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.378 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.378 14:29:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:01.378 [2024-11-20 14:29:08.218955] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:09:01.378 [2024-11-20 14:29:08.219022] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.378 [2024-11-20 14:29:08.295435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.378 [2024-11-20 14:29:08.333068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.378 [2024-11-20 14:29:08.333107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.378 [2024-11-20 14:29:08.333113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.378 [2024-11-20 14:29:08.333118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.378 [2024-11-20 14:29:08.333123] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:01.378 [2024-11-20 14:29:08.334555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.378 [2024-11-20 14:29:08.334715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.378 [2024-11-20 14:29:08.334873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.378 [2024-11-20 14:29:08.334875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:02.314 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.314 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:02.314 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:02.314 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.314 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:02.314 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.314 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:02.314 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.314 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:02.314 [2024-11-20 14:29:09.039618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.314 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.314 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:02.314 14:29:09 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:02.314 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:02.314 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:02.314 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:02.314 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:02.314 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:02.315 Malloc0 00:09:02.315 [2024-11-20 14:29:09.103922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3701814 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3701814 /var/tmp/bdevperf.sock 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3701814 ']' 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:02.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:02.315 { 00:09:02.315 "params": { 00:09:02.315 "name": "Nvme$subsystem", 00:09:02.315 "trtype": "$TEST_TRANSPORT", 00:09:02.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:02.315 "adrfam": "ipv4", 00:09:02.315 "trsvcid": "$NVMF_PORT", 00:09:02.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:02.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:02.315 "hdgst": ${hdgst:-false}, 
00:09:02.315 "ddgst": ${ddgst:-false} 00:09:02.315 }, 00:09:02.315 "method": "bdev_nvme_attach_controller" 00:09:02.315 } 00:09:02.315 EOF 00:09:02.315 )") 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:02.315 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:02.315 "params": { 00:09:02.315 "name": "Nvme0", 00:09:02.315 "trtype": "tcp", 00:09:02.315 "traddr": "10.0.0.2", 00:09:02.315 "adrfam": "ipv4", 00:09:02.315 "trsvcid": "4420", 00:09:02.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:02.315 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:02.315 "hdgst": false, 00:09:02.315 "ddgst": false 00:09:02.315 }, 00:09:02.315 "method": "bdev_nvme_attach_controller" 00:09:02.315 }' 00:09:02.315 [2024-11-20 14:29:09.175986] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:09:02.315 [2024-11-20 14:29:09.176037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3701814 ] 00:09:02.315 [2024-11-20 14:29:09.255040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.315 [2024-11-20 14:29:09.291278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.574 Running I/O for 10 seconds... 
00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.144 14:29:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.144 14:29:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.144 14:29:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:09:03.144 14:29:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:09:03.144 14:29:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:03.144 14:29:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:03.144 14:29:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:03.144 14:29:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:03.144 14:29:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.144 14:29:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.144 [2024-11-20 14:29:10.032550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.144 [2024-11-20 14:29:10.032593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.144 [2024-11-20 14:29:10.032604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.144 [2024-11-20 14:29:10.032614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.144 [2024-11-20 14:29:10.032622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.144 [2024-11-20 14:29:10.032630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.144 [2024-11-20 14:29:10.032638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.144 [2024-11-20 14:29:10.032645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.144 [2024-11-20 14:29:10.032653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6db00 is same with the state(6) to be set 00:09:03.144 [2024-11-20 14:29:10.033491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.144 [2024-11-20 14:29:10.033509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.144 [2024-11-20 14:29:10.033524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.144 [2024-11-20 14:29:10.033532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.144 [2024-11-20 14:29:10.033542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.144 
[2024-11-20 14:29:10.033551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.144 [2024-11-20 14:29:10.033560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.144 [2024-11-20 14:29:10.033568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.144 [2024-11-20 14:29:10.033577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.144 [2024-11-20 14:29:10.033585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.144 [2024-11-20 14:29:10.033594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.144 [2024-11-20 14:29:10.033602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033652] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 
[2024-11-20 14:29:10.033946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.033989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.033999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.034006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.034015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.034023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.034032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.034040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.034049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.034057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.034069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.034077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.034087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.034094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.034105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.034113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.034122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.034129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.034139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.034146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.034156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.034164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.034174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.034182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.034191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.034199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.145 [2024-11-20 14:29:10.034208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.145 [2024-11-20 14:29:10.034216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 
14:29:10.034344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.034616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:03.146 [2024-11-20 14:29:10.034623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.146 [2024-11-20 14:29:10.035839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 
00:09:03.146 14:29:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.146 14:29:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:03.146 14:29:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.146 14:29:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.146 task offset: 81920 on job bdev=Nvme0n1 fails 00:09:03.146 00:09:03.146 Latency(us) 00:09:03.146 [2024-11-20T13:29:10.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.146 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:03.146 Job: Nvme0n1 ended in about 0.43 seconds with error 00:09:03.146 Verification LBA range: start 0x0 length 0x400 00:09:03.146 Nvme0n1 : 0.43 1472.56 92.04 147.26 0.00 38368.47 1536.00 32331.09 00:09:03.146 [2024-11-20T13:29:10.206Z] =================================================================================================================== 00:09:03.146 [2024-11-20T13:29:10.206Z] Total : 1472.56 92.04 147.26 0.00 38368.47 1536.00 32331.09 00:09:03.146 [2024-11-20 14:29:10.037862] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:03.146 [2024-11-20 14:29:10.037884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6db00 (9): Bad file descriptor 00:09:03.146 14:29:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.146 14:29:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:03.146 [2024-11-20 14:29:10.058567] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:09:04.085 14:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3701814 00:09:04.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3701814) - No such process 00:09:04.085 14:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:04.085 14:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:04.085 14:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:04.085 14:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:04.085 14:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:04.085 14:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:04.085 14:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:04.085 14:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:04.085 { 00:09:04.085 "params": { 00:09:04.085 "name": "Nvme$subsystem", 00:09:04.085 "trtype": "$TEST_TRANSPORT", 00:09:04.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:04.085 "adrfam": "ipv4", 00:09:04.085 "trsvcid": "$NVMF_PORT", 00:09:04.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:04.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:04.085 "hdgst": ${hdgst:-false}, 00:09:04.085 "ddgst": ${ddgst:-false} 00:09:04.085 }, 00:09:04.085 "method": "bdev_nvme_attach_controller" 00:09:04.085 } 00:09:04.085 EOF 00:09:04.086 )") 00:09:04.086 
14:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:04.086 14:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:04.086 14:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:04.086 14:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:04.086 "params": { 00:09:04.086 "name": "Nvme0", 00:09:04.086 "trtype": "tcp", 00:09:04.086 "traddr": "10.0.0.2", 00:09:04.086 "adrfam": "ipv4", 00:09:04.086 "trsvcid": "4420", 00:09:04.086 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:04.086 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:04.086 "hdgst": false, 00:09:04.086 "ddgst": false 00:09:04.086 }, 00:09:04.086 "method": "bdev_nvme_attach_controller" 00:09:04.086 }' 00:09:04.086 [2024-11-20 14:29:11.082188] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:09:04.086 [2024-11-20 14:29:11.082242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3702170 ] 00:09:04.344 [2024-11-20 14:29:11.160405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.344 [2024-11-20 14:29:11.196394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.344 Running I/O for 1 seconds... 
00:09:05.720 2314.00 IOPS, 144.62 MiB/s 00:09:05.720 Latency(us) 00:09:05.720 [2024-11-20T13:29:12.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.720 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:05.720 Verification LBA range: start 0x0 length 0x400 00:09:05.720 Nvme0n1 : 1.01 2350.98 146.94 0.00 0.00 26591.26 2594.13 28180.48 00:09:05.720 [2024-11-20T13:29:12.780Z] =================================================================================================================== 00:09:05.720 [2024-11-20T13:29:12.780Z] Total : 2350.98 146.94 0.00 0.00 26591.26 2594.13 28180.48 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.720 14:29:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.720 rmmod nvme_tcp 00:09:05.720 rmmod nvme_fabrics 00:09:05.720 rmmod nvme_keyring 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3701442 ']' 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3701442 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3701442 ']' 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3701442 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3701442 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3701442' 00:09:05.720 killing process with pid 3701442 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3701442 00:09:05.720 14:29:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3701442 00:09:05.720 [2024-11-20 14:29:12.713290] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.720 14:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:08.286 00:09:08.286 real 0m12.217s 00:09:08.286 user 0m21.433s 
00:09:08.286 sys 0m4.990s 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:08.286 ************************************ 00:09:08.286 END TEST nvmf_host_management 00:09:08.286 ************************************ 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:08.286 ************************************ 00:09:08.286 START TEST nvmf_lvol 00:09:08.286 ************************************ 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:08.286 * Looking for test storage... 
00:09:08.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.286 14:29:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:08.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.286 --rc genhtml_branch_coverage=1 00:09:08.286 --rc genhtml_function_coverage=1 00:09:08.286 --rc genhtml_legend=1 00:09:08.286 --rc geninfo_all_blocks=1 00:09:08.286 --rc geninfo_unexecuted_blocks=1 
00:09:08.286 00:09:08.286 ' 00:09:08.286 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:08.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.286 --rc genhtml_branch_coverage=1 00:09:08.286 --rc genhtml_function_coverage=1 00:09:08.286 --rc genhtml_legend=1 00:09:08.287 --rc geninfo_all_blocks=1 00:09:08.287 --rc geninfo_unexecuted_blocks=1 00:09:08.287 00:09:08.287 ' 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:08.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.287 --rc genhtml_branch_coverage=1 00:09:08.287 --rc genhtml_function_coverage=1 00:09:08.287 --rc genhtml_legend=1 00:09:08.287 --rc geninfo_all_blocks=1 00:09:08.287 --rc geninfo_unexecuted_blocks=1 00:09:08.287 00:09:08.287 ' 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:08.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.287 --rc genhtml_branch_coverage=1 00:09:08.287 --rc genhtml_function_coverage=1 00:09:08.287 --rc genhtml_legend=1 00:09:08.287 --rc geninfo_all_blocks=1 00:09:08.287 --rc geninfo_unexecuted_blocks=1 00:09:08.287 00:09:08.287 ' 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.287 14:29:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:08.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:08.287 14:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:13.562 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:13.562 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.562 
14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:13.562 Found net devices under 0000:31:00.0: cvl_0_0 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.562 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.563 14:29:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:13.563 Found net devices under 0000:31:00.1: cvl_0_1 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:13.563 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:13.822 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:13.822 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:13.822 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:13.822 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:13.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:13.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:09:13.822 00:09:13.822 --- 10.0.0.2 ping statistics --- 00:09:13.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.822 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:09:13.822 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:13.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:13.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:09:13.823 00:09:13.823 --- 10.0.0.1 ping statistics --- 00:09:13.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.823 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3706929 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3706929 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3706929 ']' 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.823 14:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:13.823 [2024-11-20 14:29:20.719827] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:09:13.823 [2024-11-20 14:29:20.719892] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.823 [2024-11-20 14:29:20.812519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:13.823 [2024-11-20 14:29:20.865376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.823 [2024-11-20 14:29:20.865433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.823 [2024-11-20 14:29:20.865441] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.823 [2024-11-20 14:29:20.865449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.823 [2024-11-20 14:29:20.865456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:13.823 [2024-11-20 14:29:20.867272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.823 [2024-11-20 14:29:20.867382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.823 [2024-11-20 14:29:20.867383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.760 14:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.760 14:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:14.760 14:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:14.760 14:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:14.760 14:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:14.760 14:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.760 14:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:14.760 [2024-11-20 14:29:21.671110] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.760 14:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.018 14:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:15.018 14:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.018 14:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:15.018 14:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:15.277 14:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:15.535 14:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b08c0e73-f98d-4de5-b257-cb4e807b0aa1 00:09:15.535 14:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b08c0e73-f98d-4de5-b257-cb4e807b0aa1 lvol 20 00:09:15.535 14:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d012273e-805a-4110-bee0-8e3d2e08be66 00:09:15.535 14:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:15.795 14:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d012273e-805a-4110-bee0-8e3d2e08be66 00:09:15.795 14:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:16.053 [2024-11-20 14:29:22.982008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.053 14:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:16.312 14:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3707574 00:09:16.312 14:29:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:16.312 14:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:17.250 14:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d012273e-805a-4110-bee0-8e3d2e08be66 MY_SNAPSHOT 00:09:17.508 14:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=37329e82-f892-450b-8bef-9ce25d22a4a5 00:09:17.508 14:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d012273e-805a-4110-bee0-8e3d2e08be66 30 00:09:17.508 14:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 37329e82-f892-450b-8bef-9ce25d22a4a5 MY_CLONE 00:09:17.767 14:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ea83a795-bc99-42a0-98e2-2ff30c3ed185 00:09:17.767 14:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ea83a795-bc99-42a0-98e2-2ff30c3ed185 00:09:18.026 14:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3707574 00:09:28.010 Initializing NVMe Controllers 00:09:28.010 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:28.010 Controller IO queue size 128, less than required. 00:09:28.010 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:28.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:28.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:28.010 Initialization complete. Launching workers. 00:09:28.010 ======================================================== 00:09:28.010 Latency(us) 00:09:28.010 Device Information : IOPS MiB/s Average min max 00:09:28.010 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17324.10 67.67 7391.02 785.51 52958.27 00:09:28.010 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16713.20 65.29 7659.85 1356.83 41454.55 00:09:28.010 ======================================================== 00:09:28.010 Total : 34037.30 132.96 7523.02 785.51 52958.27 00:09:28.010 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d012273e-805a-4110-bee0-8e3d2e08be66 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b08c0e73-f98d-4de5-b257-cb4e807b0aa1 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:28.010 rmmod nvme_tcp 00:09:28.010 rmmod nvme_fabrics 00:09:28.010 rmmod nvme_keyring 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3706929 ']' 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3706929 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3706929 ']' 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3706929 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.010 14:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3706929 00:09:28.010 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.011 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.011 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3706929' 00:09:28.011 killing process with pid 3706929 00:09:28.011 14:29:34 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3706929 00:09:28.011 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3706929 00:09:28.011 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:28.011 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:28.011 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:28.011 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:28.011 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:28.011 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:28.011 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:28.011 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:28.011 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:28.011 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.011 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.011 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:29.391 00:09:29.391 real 0m21.363s 00:09:29.391 user 1m1.806s 00:09:29.391 sys 0m7.089s 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:29.391 ************************************ 00:09:29.391 END TEST 
nvmf_lvol 00:09:29.391 ************************************ 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.391 ************************************ 00:09:29.391 START TEST nvmf_lvs_grow 00:09:29.391 ************************************ 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:29.391 * Looking for test storage... 00:09:29.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.391 14:29:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:29.391 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:29.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.392 --rc genhtml_branch_coverage=1 00:09:29.392 --rc genhtml_function_coverage=1 00:09:29.392 --rc genhtml_legend=1 00:09:29.392 --rc geninfo_all_blocks=1 00:09:29.392 --rc geninfo_unexecuted_blocks=1 00:09:29.392 00:09:29.392 ' 
00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:29.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.392 --rc genhtml_branch_coverage=1 00:09:29.392 --rc genhtml_function_coverage=1 00:09:29.392 --rc genhtml_legend=1 00:09:29.392 --rc geninfo_all_blocks=1 00:09:29.392 --rc geninfo_unexecuted_blocks=1 00:09:29.392 00:09:29.392 ' 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:29.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.392 --rc genhtml_branch_coverage=1 00:09:29.392 --rc genhtml_function_coverage=1 00:09:29.392 --rc genhtml_legend=1 00:09:29.392 --rc geninfo_all_blocks=1 00:09:29.392 --rc geninfo_unexecuted_blocks=1 00:09:29.392 00:09:29.392 ' 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:29.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.392 --rc genhtml_branch_coverage=1 00:09:29.392 --rc genhtml_function_coverage=1 00:09:29.392 --rc genhtml_legend=1 00:09:29.392 --rc geninfo_all_blocks=1 00:09:29.392 --rc geninfo_unexecuted_blocks=1 00:09:29.392 00:09:29.392 ' 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.392 14:29:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.392 
14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.392 14:29:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.392 
14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:29.392 14:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:34.664 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.664 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.664 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.664 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.664 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.664 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.664 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.664 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:34.923 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:34.923 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.923 
14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:34.923 Found net devices under 0000:31:00.0: cvl_0_0 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:34.923 Found net devices under 0000:31:00.1: cvl_0_1 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.923 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:34.923 14:29:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:34.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:09:34.924 00:09:34.924 --- 10.0.0.2 ping statistics --- 00:09:34.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.924 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:34.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:09:34.924 00:09:34.924 --- 10.0.0.1 ping statistics --- 00:09:34.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.924 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3714279 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3714279 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3714279 ']' 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.924 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:35.182 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:35.182 [2024-11-20 14:29:42.019417] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:09:35.182 [2024-11-20 14:29:42.019464] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.182 [2024-11-20 14:29:42.090361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.182 [2024-11-20 14:29:42.119395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.182 [2024-11-20 14:29:42.119420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.182 [2024-11-20 14:29:42.119426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.182 [2024-11-20 14:29:42.119431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.182 [2024-11-20 14:29:42.119435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:35.182 [2024-11-20 14:29:42.119906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.182 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.182 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:35.182 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:35.182 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:35.182 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:35.182 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.182 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:35.440 [2024-11-20 14:29:42.359997] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.440 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:35.440 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.440 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.440 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:35.440 ************************************ 00:09:35.440 START TEST lvs_grow_clean 00:09:35.440 ************************************ 00:09:35.440 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:35.440 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:35.440 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:35.440 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:35.440 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:35.440 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:35.441 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:35.441 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:35.441 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:35.441 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:35.701 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:35.701 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:35.701 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=21ede826-284a-486f-9b8b-5c95d611c053 00:09:35.701 14:29:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21ede826-284a-486f-9b8b-5c95d611c053 00:09:35.701 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:35.961 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:35.961 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:35.961 14:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 21ede826-284a-486f-9b8b-5c95d611c053 lvol 150 00:09:36.221 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3748d44e-72ad-445f-b0ea-61df01ea09af 00:09:36.222 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:36.222 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:36.222 [2024-11-20 14:29:43.192790] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:36.222 [2024-11-20 14:29:43.192835] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:36.222 true 00:09:36.222 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21ede826-284a-486f-9b8b-5c95d611c053 00:09:36.222 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:36.511 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:36.511 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:36.511 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3748d44e-72ad-445f-b0ea-61df01ea09af 00:09:36.823 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:36.823 [2024-11-20 14:29:43.810609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.823 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:37.136 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3714886 00:09:37.136 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:37.136 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 
3714886 /var/tmp/bdevperf.sock 00:09:37.136 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:37.136 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3714886 ']' 00:09:37.136 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:37.136 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.136 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:37.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:37.136 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.136 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:37.136 [2024-11-20 14:29:44.014775] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:09:37.136 [2024-11-20 14:29:44.014827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3714886 ] 00:09:37.136 [2024-11-20 14:29:44.091888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.136 [2024-11-20 14:29:44.127805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.074 14:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.074 14:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:38.074 14:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:38.334 Nvme0n1 00:09:38.334 14:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:38.334 [ 00:09:38.334 { 00:09:38.334 "name": "Nvme0n1", 00:09:38.334 "aliases": [ 00:09:38.334 "3748d44e-72ad-445f-b0ea-61df01ea09af" 00:09:38.334 ], 00:09:38.334 "product_name": "NVMe disk", 00:09:38.334 "block_size": 4096, 00:09:38.334 "num_blocks": 38912, 00:09:38.334 "uuid": "3748d44e-72ad-445f-b0ea-61df01ea09af", 00:09:38.334 "numa_id": 0, 00:09:38.334 "assigned_rate_limits": { 00:09:38.334 "rw_ios_per_sec": 0, 00:09:38.334 "rw_mbytes_per_sec": 0, 00:09:38.334 "r_mbytes_per_sec": 0, 00:09:38.334 "w_mbytes_per_sec": 0 00:09:38.334 }, 00:09:38.334 "claimed": false, 00:09:38.334 "zoned": false, 00:09:38.334 "supported_io_types": { 00:09:38.334 "read": true, 
00:09:38.334 "write": true, 00:09:38.334 "unmap": true, 00:09:38.334 "flush": true, 00:09:38.334 "reset": true, 00:09:38.334 "nvme_admin": true, 00:09:38.334 "nvme_io": true, 00:09:38.334 "nvme_io_md": false, 00:09:38.334 "write_zeroes": true, 00:09:38.334 "zcopy": false, 00:09:38.334 "get_zone_info": false, 00:09:38.334 "zone_management": false, 00:09:38.334 "zone_append": false, 00:09:38.334 "compare": true, 00:09:38.334 "compare_and_write": true, 00:09:38.334 "abort": true, 00:09:38.334 "seek_hole": false, 00:09:38.334 "seek_data": false, 00:09:38.334 "copy": true, 00:09:38.334 "nvme_iov_md": false 00:09:38.334 }, 00:09:38.334 "memory_domains": [ 00:09:38.334 { 00:09:38.334 "dma_device_id": "system", 00:09:38.334 "dma_device_type": 1 00:09:38.334 } 00:09:38.334 ], 00:09:38.334 "driver_specific": { 00:09:38.334 "nvme": [ 00:09:38.334 { 00:09:38.334 "trid": { 00:09:38.334 "trtype": "TCP", 00:09:38.334 "adrfam": "IPv4", 00:09:38.334 "traddr": "10.0.0.2", 00:09:38.334 "trsvcid": "4420", 00:09:38.334 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:38.334 }, 00:09:38.334 "ctrlr_data": { 00:09:38.334 "cntlid": 1, 00:09:38.334 "vendor_id": "0x8086", 00:09:38.334 "model_number": "SPDK bdev Controller", 00:09:38.334 "serial_number": "SPDK0", 00:09:38.334 "firmware_revision": "25.01", 00:09:38.334 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:38.334 "oacs": { 00:09:38.334 "security": 0, 00:09:38.334 "format": 0, 00:09:38.334 "firmware": 0, 00:09:38.334 "ns_manage": 0 00:09:38.334 }, 00:09:38.334 "multi_ctrlr": true, 00:09:38.334 "ana_reporting": false 00:09:38.334 }, 00:09:38.334 "vs": { 00:09:38.334 "nvme_version": "1.3" 00:09:38.334 }, 00:09:38.334 "ns_data": { 00:09:38.334 "id": 1, 00:09:38.334 "can_share": true 00:09:38.334 } 00:09:38.334 } 00:09:38.334 ], 00:09:38.334 "mp_policy": "active_passive" 00:09:38.334 } 00:09:38.334 } 00:09:38.334 ] 00:09:38.334 14:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3715095 00:09:38.334 14:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:38.334 14:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:38.593 Running I/O for 10 seconds... 00:09:39.530 Latency(us) 00:09:39.530 [2024-11-20T13:29:46.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.530 Nvme0n1 : 1.00 24723.00 96.57 0.00 0.00 0.00 0.00 0.00 00:09:39.530 [2024-11-20T13:29:46.590Z] =================================================================================================================== 00:09:39.530 [2024-11-20T13:29:46.590Z] Total : 24723.00 96.57 0.00 0.00 0.00 0.00 0.00 00:09:39.530 00:09:40.467 14:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 21ede826-284a-486f-9b8b-5c95d611c053 00:09:40.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.467 Nvme0n1 : 2.00 24645.50 96.27 0.00 0.00 0.00 0.00 0.00 00:09:40.467 [2024-11-20T13:29:47.527Z] =================================================================================================================== 00:09:40.467 [2024-11-20T13:29:47.527Z] Total : 24645.50 96.27 0.00 0.00 0.00 0.00 0.00 00:09:40.467 00:09:40.467 true 00:09:40.467 14:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21ede826-284a-486f-9b8b-5c95d611c053 00:09:40.467 14:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:40.725 14:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:40.725 14:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:40.725 14:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3715095 00:09:41.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.663 Nvme0n1 : 3.00 24633.00 96.22 0.00 0.00 0.00 0.00 0.00 00:09:41.663 [2024-11-20T13:29:48.723Z] =================================================================================================================== 00:09:41.663 [2024-11-20T13:29:48.723Z] Total : 24633.00 96.22 0.00 0.00 0.00 0.00 0.00 00:09:41.663 00:09:42.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.609 Nvme0n1 : 4.00 24644.75 96.27 0.00 0.00 0.00 0.00 0.00 00:09:42.609 [2024-11-20T13:29:49.669Z] =================================================================================================================== 00:09:42.609 [2024-11-20T13:29:49.669Z] Total : 24644.75 96.27 0.00 0.00 0.00 0.00 0.00 00:09:42.609 00:09:43.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.546 Nvme0n1 : 5.00 24650.20 96.29 0.00 0.00 0.00 0.00 0.00 00:09:43.546 [2024-11-20T13:29:50.606Z] =================================================================================================================== 00:09:43.546 [2024-11-20T13:29:50.606Z] Total : 24650.20 96.29 0.00 0.00 0.00 0.00 0.00 00:09:43.546 00:09:44.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.487 Nvme0n1 : 6.00 24664.50 96.35 0.00 0.00 0.00 0.00 0.00 00:09:44.487 [2024-11-20T13:29:51.547Z] =================================================================================================================== 00:09:44.487 
[2024-11-20T13:29:51.547Z] Total : 24664.50 96.35 0.00 0.00 0.00 0.00 0.00 00:09:44.487 00:09:45.427 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.427 Nvme0n1 : 7.00 24678.14 96.40 0.00 0.00 0.00 0.00 0.00 00:09:45.427 [2024-11-20T13:29:52.487Z] =================================================================================================================== 00:09:45.427 [2024-11-20T13:29:52.487Z] Total : 24678.14 96.40 0.00 0.00 0.00 0.00 0.00 00:09:45.427 00:09:46.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.809 Nvme0n1 : 8.00 24688.38 96.44 0.00 0.00 0.00 0.00 0.00 00:09:46.809 [2024-11-20T13:29:53.869Z] =================================================================================================================== 00:09:46.809 [2024-11-20T13:29:53.869Z] Total : 24688.38 96.44 0.00 0.00 0.00 0.00 0.00 00:09:46.809 00:09:47.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.378 Nvme0n1 : 9.00 24698.11 96.48 0.00 0.00 0.00 0.00 0.00 00:09:47.378 [2024-11-20T13:29:54.438Z] =================================================================================================================== 00:09:47.378 [2024-11-20T13:29:54.438Z] Total : 24698.11 96.48 0.00 0.00 0.00 0.00 0.00 00:09:47.378 00:09:48.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.756 Nvme0n1 : 10.00 24705.90 96.51 0.00 0.00 0.00 0.00 0.00 00:09:48.756 [2024-11-20T13:29:55.816Z] =================================================================================================================== 00:09:48.756 [2024-11-20T13:29:55.816Z] Total : 24705.90 96.51 0.00 0.00 0.00 0.00 0.00 00:09:48.756 00:09:48.756 00:09:48.756 Latency(us) 00:09:48.756 [2024-11-20T13:29:55.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:48.756 Nvme0n1 : 10.01 24706.29 96.51 0.00 0.00 5176.98 2362.03 9830.40 00:09:48.756 [2024-11-20T13:29:55.816Z] =================================================================================================================== 00:09:48.756 [2024-11-20T13:29:55.816Z] Total : 24706.29 96.51 0.00 0.00 5176.98 2362.03 9830.40 00:09:48.756 { 00:09:48.756 "results": [ 00:09:48.756 { 00:09:48.756 "job": "Nvme0n1", 00:09:48.756 "core_mask": "0x2", 00:09:48.756 "workload": "randwrite", 00:09:48.756 "status": "finished", 00:09:48.756 "queue_depth": 128, 00:09:48.756 "io_size": 4096, 00:09:48.756 "runtime": 10.005021, 00:09:48.756 "iops": 24706.294969295916, 00:09:48.756 "mibps": 96.50896472381217, 00:09:48.756 "io_failed": 0, 00:09:48.756 "io_timeout": 0, 00:09:48.756 "avg_latency_us": 5176.977577623419, 00:09:48.756 "min_latency_us": 2362.0266666666666, 00:09:48.756 "max_latency_us": 9830.4 00:09:48.756 } 00:09:48.756 ], 00:09:48.756 "core_count": 1 00:09:48.756 } 00:09:48.756 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3714886 00:09:48.756 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3714886 ']' 00:09:48.757 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3714886 00:09:48.757 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:48.757 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.757 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3714886 00:09:48.757 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:48.757 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:48.757 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3714886' 00:09:48.757 killing process with pid 3714886 00:09:48.757 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3714886 00:09:48.757 Received shutdown signal, test time was about 10.000000 seconds 00:09:48.757 00:09:48.757 Latency(us) 00:09:48.757 [2024-11-20T13:29:55.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.757 [2024-11-20T13:29:55.817Z] =================================================================================================================== 00:09:48.757 [2024-11-20T13:29:55.817Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:48.757 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3714886 00:09:48.757 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:48.757 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:49.015 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21ede826-284a-486f-9b8b-5c95d611c053 00:09:49.015 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:49.275 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:49.275 14:29:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:49.275 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:49.275 [2024-11-20 14:29:56.231682] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:49.275 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21ede826-284a-486f-9b8b-5c95d611c053 00:09:49.275 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:49.275 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21ede826-284a-486f-9b8b-5c95d611c053 00:09:49.275 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.275 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:49.275 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.275 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:49.275 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.275 14:29:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:49.275 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.275 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:49.275 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21ede826-284a-486f-9b8b-5c95d611c053 00:09:49.534 request: 00:09:49.534 { 00:09:49.534 "uuid": "21ede826-284a-486f-9b8b-5c95d611c053", 00:09:49.534 "method": "bdev_lvol_get_lvstores", 00:09:49.534 "req_id": 1 00:09:49.534 } 00:09:49.534 Got JSON-RPC error response 00:09:49.534 response: 00:09:49.534 { 00:09:49.534 "code": -19, 00:09:49.534 "message": "No such device" 00:09:49.534 } 00:09:49.534 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:49.534 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:49.534 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:49.534 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:49.534 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:49.534 aio_bdev 00:09:49.534 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3748d44e-72ad-445f-b0ea-61df01ea09af 00:09:49.534 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=3748d44e-72ad-445f-b0ea-61df01ea09af 00:09:49.534 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.534 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:49.534 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.534 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.534 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:49.793 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3748d44e-72ad-445f-b0ea-61df01ea09af -t 2000 00:09:50.052 [ 00:09:50.052 { 00:09:50.052 "name": "3748d44e-72ad-445f-b0ea-61df01ea09af", 00:09:50.052 "aliases": [ 00:09:50.052 "lvs/lvol" 00:09:50.052 ], 00:09:50.052 "product_name": "Logical Volume", 00:09:50.053 "block_size": 4096, 00:09:50.053 "num_blocks": 38912, 00:09:50.053 "uuid": "3748d44e-72ad-445f-b0ea-61df01ea09af", 00:09:50.053 "assigned_rate_limits": { 00:09:50.053 "rw_ios_per_sec": 0, 00:09:50.053 "rw_mbytes_per_sec": 0, 00:09:50.053 "r_mbytes_per_sec": 0, 00:09:50.053 "w_mbytes_per_sec": 0 00:09:50.053 }, 00:09:50.053 "claimed": false, 00:09:50.053 "zoned": false, 00:09:50.053 "supported_io_types": { 00:09:50.053 "read": true, 00:09:50.053 "write": true, 00:09:50.053 "unmap": true, 00:09:50.053 "flush": false, 00:09:50.053 "reset": true, 00:09:50.053 
"nvme_admin": false, 00:09:50.053 "nvme_io": false, 00:09:50.053 "nvme_io_md": false, 00:09:50.053 "write_zeroes": true, 00:09:50.053 "zcopy": false, 00:09:50.053 "get_zone_info": false, 00:09:50.053 "zone_management": false, 00:09:50.053 "zone_append": false, 00:09:50.053 "compare": false, 00:09:50.053 "compare_and_write": false, 00:09:50.053 "abort": false, 00:09:50.053 "seek_hole": true, 00:09:50.053 "seek_data": true, 00:09:50.053 "copy": false, 00:09:50.053 "nvme_iov_md": false 00:09:50.053 }, 00:09:50.053 "driver_specific": { 00:09:50.053 "lvol": { 00:09:50.053 "lvol_store_uuid": "21ede826-284a-486f-9b8b-5c95d611c053", 00:09:50.053 "base_bdev": "aio_bdev", 00:09:50.053 "thin_provision": false, 00:09:50.053 "num_allocated_clusters": 38, 00:09:50.053 "snapshot": false, 00:09:50.053 "clone": false, 00:09:50.053 "esnap_clone": false 00:09:50.053 } 00:09:50.053 } 00:09:50.053 } 00:09:50.053 ] 00:09:50.053 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:50.053 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21ede826-284a-486f-9b8b-5c95d611c053 00:09:50.053 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:50.053 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:50.053 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21ede826-284a-486f-9b8b-5c95d611c053 00:09:50.053 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:50.312 14:29:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:50.312 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3748d44e-72ad-445f-b0ea-61df01ea09af 00:09:50.571 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 21ede826-284a-486f-9b8b-5c95d611c053 00:09:50.571 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:50.830 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:50.830 00:09:50.830 real 0m15.336s 00:09:50.830 user 0m14.987s 00:09:50.830 sys 0m1.204s 00:09:50.830 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.830 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:50.830 ************************************ 00:09:50.830 END TEST lvs_grow_clean 00:09:50.830 ************************************ 00:09:50.830 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:50.830 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:50.830 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.830 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:50.830 ************************************ 
00:09:50.830 START TEST lvs_grow_dirty 00:09:50.830 ************************************ 00:09:50.831 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:50.831 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:50.831 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:50.831 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:50.831 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:50.831 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:50.831 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:50.831 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:50.831 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:50.831 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:51.090 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:51.090 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:51.090 14:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6356332c-9b41-49b3-897d-35d325143227 00:09:51.090 14:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6356332c-9b41-49b3-897d-35d325143227 00:09:51.090 14:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:51.348 14:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:51.348 14:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:51.348 14:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6356332c-9b41-49b3-897d-35d325143227 lvol 150 00:09:51.608 14:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a6395e7e-f697-4ff6-b2d9-f97be3b38b4b 00:09:51.608 14:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:51.608 14:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:51.608 [2024-11-20 14:29:58.589864] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:51.608 [2024-11-20 14:29:58.589909] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:51.608 true 00:09:51.608 14:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6356332c-9b41-49b3-897d-35d325143227 00:09:51.609 14:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:51.867 14:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:51.867 14:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:51.867 14:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a6395e7e-f697-4ff6-b2d9-f97be3b38b4b 00:09:52.126 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:52.385 [2024-11-20 14:29:59.207682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.385 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:52.385 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3718184 00:09:52.385 14:29:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:52.385 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3718184 /var/tmp/bdevperf.sock 00:09:52.385 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3718184 ']' 00:09:52.385 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:52.385 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.385 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:52.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:52.385 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:52.386 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.386 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:52.386 [2024-11-20 14:29:59.412490] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:09:52.386 [2024-11-20 14:29:59.412543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718184 ] 00:09:52.645 [2024-11-20 14:29:59.478272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.645 [2024-11-20 14:29:59.508275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.645 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.645 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:52.645 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:52.905 Nvme0n1 00:09:52.905 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:53.165 [ 00:09:53.165 { 00:09:53.165 "name": "Nvme0n1", 00:09:53.165 "aliases": [ 00:09:53.165 "a6395e7e-f697-4ff6-b2d9-f97be3b38b4b" 00:09:53.165 ], 00:09:53.165 "product_name": "NVMe disk", 00:09:53.165 "block_size": 4096, 00:09:53.165 "num_blocks": 38912, 00:09:53.165 "uuid": "a6395e7e-f697-4ff6-b2d9-f97be3b38b4b", 00:09:53.165 "numa_id": 0, 00:09:53.165 "assigned_rate_limits": { 00:09:53.165 "rw_ios_per_sec": 0, 00:09:53.165 "rw_mbytes_per_sec": 0, 00:09:53.165 "r_mbytes_per_sec": 0, 00:09:53.165 "w_mbytes_per_sec": 0 00:09:53.165 }, 00:09:53.165 "claimed": false, 00:09:53.165 "zoned": false, 00:09:53.165 "supported_io_types": { 00:09:53.165 "read": true, 
00:09:53.165 "write": true, 00:09:53.165 "unmap": true, 00:09:53.165 "flush": true, 00:09:53.165 "reset": true, 00:09:53.165 "nvme_admin": true, 00:09:53.165 "nvme_io": true, 00:09:53.165 "nvme_io_md": false, 00:09:53.165 "write_zeroes": true, 00:09:53.165 "zcopy": false, 00:09:53.165 "get_zone_info": false, 00:09:53.165 "zone_management": false, 00:09:53.165 "zone_append": false, 00:09:53.165 "compare": true, 00:09:53.165 "compare_and_write": true, 00:09:53.165 "abort": true, 00:09:53.165 "seek_hole": false, 00:09:53.165 "seek_data": false, 00:09:53.165 "copy": true, 00:09:53.165 "nvme_iov_md": false 00:09:53.165 }, 00:09:53.165 "memory_domains": [ 00:09:53.165 { 00:09:53.165 "dma_device_id": "system", 00:09:53.165 "dma_device_type": 1 00:09:53.165 } 00:09:53.165 ], 00:09:53.165 "driver_specific": { 00:09:53.165 "nvme": [ 00:09:53.165 { 00:09:53.165 "trid": { 00:09:53.165 "trtype": "TCP", 00:09:53.165 "adrfam": "IPv4", 00:09:53.165 "traddr": "10.0.0.2", 00:09:53.165 "trsvcid": "4420", 00:09:53.165 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:53.165 }, 00:09:53.165 "ctrlr_data": { 00:09:53.165 "cntlid": 1, 00:09:53.165 "vendor_id": "0x8086", 00:09:53.165 "model_number": "SPDK bdev Controller", 00:09:53.165 "serial_number": "SPDK0", 00:09:53.165 "firmware_revision": "25.01", 00:09:53.165 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:53.165 "oacs": { 00:09:53.165 "security": 0, 00:09:53.165 "format": 0, 00:09:53.165 "firmware": 0, 00:09:53.165 "ns_manage": 0 00:09:53.165 }, 00:09:53.165 "multi_ctrlr": true, 00:09:53.165 "ana_reporting": false 00:09:53.165 }, 00:09:53.165 "vs": { 00:09:53.165 "nvme_version": "1.3" 00:09:53.165 }, 00:09:53.165 "ns_data": { 00:09:53.165 "id": 1, 00:09:53.165 "can_share": true 00:09:53.165 } 00:09:53.165 } 00:09:53.165 ], 00:09:53.165 "mp_policy": "active_passive" 00:09:53.165 } 00:09:53.165 } 00:09:53.165 ] 00:09:53.165 14:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3718429 00:09:53.165 14:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:53.165 14:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:53.165 Running I/O for 10 seconds... 00:09:54.541 Latency(us) 00:09:54.541 [2024-11-20T13:30:01.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.541 Nvme0n1 : 1.00 24961.00 97.50 0.00 0.00 0.00 0.00 0.00 00:09:54.541 [2024-11-20T13:30:01.601Z] =================================================================================================================== 00:09:54.541 [2024-11-20T13:30:01.601Z] Total : 24961.00 97.50 0.00 0.00 0.00 0.00 0.00 00:09:54.541 00:09:55.109 14:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6356332c-9b41-49b3-897d-35d325143227 00:09:55.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.368 Nvme0n1 : 2.00 25088.00 98.00 0.00 0.00 0.00 0.00 0.00 00:09:55.368 [2024-11-20T13:30:02.428Z] =================================================================================================================== 00:09:55.368 [2024-11-20T13:30:02.428Z] Total : 25088.00 98.00 0.00 0.00 0.00 0.00 0.00 00:09:55.368 00:09:55.368 true 00:09:55.368 14:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:55.368 14:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
6356332c-9b41-49b3-897d-35d325143227 00:09:55.627 14:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:55.627 14:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:55.627 14:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3718429 00:09:56.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.194 Nvme0n1 : 3.00 25150.67 98.24 0.00 0.00 0.00 0.00 0.00 00:09:56.194 [2024-11-20T13:30:03.254Z] =================================================================================================================== 00:09:56.194 [2024-11-20T13:30:03.254Z] Total : 25150.67 98.24 0.00 0.00 0.00 0.00 0.00 00:09:56.194 00:09:57.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.569 Nvme0n1 : 4.00 25202.25 98.45 0.00 0.00 0.00 0.00 0.00 00:09:57.569 [2024-11-20T13:30:04.629Z] =================================================================================================================== 00:09:57.569 [2024-11-20T13:30:04.629Z] Total : 25202.25 98.45 0.00 0.00 0.00 0.00 0.00 00:09:57.569 00:09:58.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.506 Nvme0n1 : 5.00 25239.60 98.59 0.00 0.00 0.00 0.00 0.00 00:09:58.506 [2024-11-20T13:30:05.566Z] =================================================================================================================== 00:09:58.506 [2024-11-20T13:30:05.566Z] Total : 25239.60 98.59 0.00 0.00 0.00 0.00 0.00 00:09:58.506 00:09:59.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.496 Nvme0n1 : 6.00 25267.50 98.70 0.00 0.00 0.00 0.00 0.00 00:09:59.496 [2024-11-20T13:30:06.556Z] =================================================================================================================== 
00:09:59.496 [2024-11-20T13:30:06.556Z] Total : 25267.50 98.70 0.00 0.00 0.00 0.00 0.00 00:09:59.496 00:10:00.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.434 Nvme0n1 : 7.00 25283.00 98.76 0.00 0.00 0.00 0.00 0.00 00:10:00.434 [2024-11-20T13:30:07.494Z] =================================================================================================================== 00:10:00.434 [2024-11-20T13:30:07.494Z] Total : 25283.00 98.76 0.00 0.00 0.00 0.00 0.00 00:10:00.434 00:10:01.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:01.373 Nvme0n1 : 8.00 25302.25 98.84 0.00 0.00 0.00 0.00 0.00 00:10:01.373 [2024-11-20T13:30:08.433Z] =================================================================================================================== 00:10:01.373 [2024-11-20T13:30:08.433Z] Total : 25302.25 98.84 0.00 0.00 0.00 0.00 0.00 00:10:01.373 00:10:02.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:02.310 Nvme0n1 : 9.00 25313.44 98.88 0.00 0.00 0.00 0.00 0.00 00:10:02.310 [2024-11-20T13:30:09.370Z] =================================================================================================================== 00:10:02.310 [2024-11-20T13:30:09.370Z] Total : 25313.44 98.88 0.00 0.00 0.00 0.00 0.00 00:10:02.310 00:10:03.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:03.249 Nvme0n1 : 10.00 25328.70 98.94 0.00 0.00 0.00 0.00 0.00 00:10:03.249 [2024-11-20T13:30:10.309Z] =================================================================================================================== 00:10:03.249 [2024-11-20T13:30:10.309Z] Total : 25328.70 98.94 0.00 0.00 0.00 0.00 0.00 00:10:03.249 00:10:03.249 00:10:03.249 Latency(us) 00:10:03.249 [2024-11-20T13:30:10.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:10:03.249 Nvme0n1 : 10.00 25326.08 98.93 0.00 0.00 5050.80 1570.13 9229.65 00:10:03.249 [2024-11-20T13:30:10.309Z] =================================================================================================================== 00:10:03.249 [2024-11-20T13:30:10.309Z] Total : 25326.08 98.93 0.00 0.00 5050.80 1570.13 9229.65 00:10:03.249 { 00:10:03.249 "results": [ 00:10:03.249 { 00:10:03.249 "job": "Nvme0n1", 00:10:03.249 "core_mask": "0x2", 00:10:03.249 "workload": "randwrite", 00:10:03.249 "status": "finished", 00:10:03.249 "queue_depth": 128, 00:10:03.249 "io_size": 4096, 00:10:03.249 "runtime": 10.003601, 00:10:03.249 "iops": 25326.080078563708, 00:10:03.249 "mibps": 98.93000030688948, 00:10:03.249 "io_failed": 0, 00:10:03.249 "io_timeout": 0, 00:10:03.249 "avg_latency_us": 5050.800767101371, 00:10:03.249 "min_latency_us": 1570.1333333333334, 00:10:03.249 "max_latency_us": 9229.653333333334 00:10:03.249 } 00:10:03.249 ], 00:10:03.249 "core_count": 1 00:10:03.249 } 00:10:03.249 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3718184 00:10:03.249 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3718184 ']' 00:10:03.249 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3718184 00:10:03.249 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:03.249 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.249 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3718184 00:10:03.249 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:03.249 14:30:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:03.250 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3718184' 00:10:03.250 killing process with pid 3718184 00:10:03.250 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3718184 00:10:03.250 Received shutdown signal, test time was about 10.000000 seconds 00:10:03.250 00:10:03.250 Latency(us) 00:10:03.250 [2024-11-20T13:30:10.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.250 [2024-11-20T13:30:10.310Z] =================================================================================================================== 00:10:03.250 [2024-11-20T13:30:10.310Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:03.250 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3718184 00:10:03.509 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:03.509 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:03.770 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6356332c-9b41-49b3-897d-35d325143227 00:10:03.770 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3714279 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3714279 00:10:04.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3714279 Killed "${NVMF_APP[@]}" "$@" 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3721321 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3721321 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3721321 ']' 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:04.030 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:04.031 [2024-11-20 14:30:10.939259] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:10:04.031 [2024-11-20 14:30:10.939314] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.031 [2024-11-20 14:30:11.009745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.031 [2024-11-20 14:30:11.039121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.031 [2024-11-20 14:30:11.039148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.031 [2024-11-20 14:30:11.039153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.031 [2024-11-20 14:30:11.039158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.031 [2024-11-20 14:30:11.039162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:04.031 [2024-11-20 14:30:11.039638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.291 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.291 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:04.291 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:04.291 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.291 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:04.291 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.291 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:04.291 [2024-11-20 14:30:11.280777] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:04.291 [2024-11-20 14:30:11.280852] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:04.291 [2024-11-20 14:30:11.280875] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:04.291 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:04.291 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a6395e7e-f697-4ff6-b2d9-f97be3b38b4b 00:10:04.291 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a6395e7e-f697-4ff6-b2d9-f97be3b38b4b 
00:10:04.291 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.291 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:04.291 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.291 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.291 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:04.549 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a6395e7e-f697-4ff6-b2d9-f97be3b38b4b -t 2000 00:10:04.549 [ 00:10:04.549 { 00:10:04.549 "name": "a6395e7e-f697-4ff6-b2d9-f97be3b38b4b", 00:10:04.549 "aliases": [ 00:10:04.549 "lvs/lvol" 00:10:04.549 ], 00:10:04.549 "product_name": "Logical Volume", 00:10:04.549 "block_size": 4096, 00:10:04.549 "num_blocks": 38912, 00:10:04.549 "uuid": "a6395e7e-f697-4ff6-b2d9-f97be3b38b4b", 00:10:04.549 "assigned_rate_limits": { 00:10:04.549 "rw_ios_per_sec": 0, 00:10:04.549 "rw_mbytes_per_sec": 0, 00:10:04.549 "r_mbytes_per_sec": 0, 00:10:04.549 "w_mbytes_per_sec": 0 00:10:04.549 }, 00:10:04.549 "claimed": false, 00:10:04.549 "zoned": false, 00:10:04.549 "supported_io_types": { 00:10:04.549 "read": true, 00:10:04.549 "write": true, 00:10:04.549 "unmap": true, 00:10:04.549 "flush": false, 00:10:04.549 "reset": true, 00:10:04.549 "nvme_admin": false, 00:10:04.549 "nvme_io": false, 00:10:04.549 "nvme_io_md": false, 00:10:04.549 "write_zeroes": true, 00:10:04.549 "zcopy": false, 00:10:04.549 "get_zone_info": false, 00:10:04.549 "zone_management": false, 00:10:04.549 "zone_append": 
false, 00:10:04.549 "compare": false, 00:10:04.549 "compare_and_write": false, 00:10:04.549 "abort": false, 00:10:04.549 "seek_hole": true, 00:10:04.549 "seek_data": true, 00:10:04.549 "copy": false, 00:10:04.549 "nvme_iov_md": false 00:10:04.549 }, 00:10:04.549 "driver_specific": { 00:10:04.549 "lvol": { 00:10:04.549 "lvol_store_uuid": "6356332c-9b41-49b3-897d-35d325143227", 00:10:04.549 "base_bdev": "aio_bdev", 00:10:04.549 "thin_provision": false, 00:10:04.549 "num_allocated_clusters": 38, 00:10:04.549 "snapshot": false, 00:10:04.549 "clone": false, 00:10:04.549 "esnap_clone": false 00:10:04.549 } 00:10:04.549 } 00:10:04.549 } 00:10:04.549 ] 00:10:04.549 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:04.549 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6356332c-9b41-49b3-897d-35d325143227 00:10:04.549 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:04.809 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:04.809 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6356332c-9b41-49b3-897d-35d325143227 00:10:04.809 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:05.069 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:05.069 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:10:05.069 [2024-11-20 14:30:12.053266] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:05.069 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6356332c-9b41-49b3-897d-35d325143227 00:10:05.069 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:05.069 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6356332c-9b41-49b3-897d-35d325143227 00:10:05.069 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.069 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:05.069 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.069 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:05.069 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.069 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:05.069 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.069 14:30:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:05.069 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6356332c-9b41-49b3-897d-35d325143227 00:10:05.330 request: 00:10:05.330 { 00:10:05.330 "uuid": "6356332c-9b41-49b3-897d-35d325143227", 00:10:05.330 "method": "bdev_lvol_get_lvstores", 00:10:05.330 "req_id": 1 00:10:05.330 } 00:10:05.330 Got JSON-RPC error response 00:10:05.330 response: 00:10:05.330 { 00:10:05.330 "code": -19, 00:10:05.330 "message": "No such device" 00:10:05.330 } 00:10:05.330 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:05.330 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:05.330 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:05.330 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:05.330 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:05.330 aio_bdev 00:10:05.330 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a6395e7e-f697-4ff6-b2d9-f97be3b38b4b 00:10:05.330 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a6395e7e-f697-4ff6-b2d9-f97be3b38b4b 00:10:05.330 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.590 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:05.590 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.590 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.590 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:05.590 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a6395e7e-f697-4ff6-b2d9-f97be3b38b4b -t 2000 00:10:05.850 [ 00:10:05.850 { 00:10:05.850 "name": "a6395e7e-f697-4ff6-b2d9-f97be3b38b4b", 00:10:05.850 "aliases": [ 00:10:05.850 "lvs/lvol" 00:10:05.850 ], 00:10:05.850 "product_name": "Logical Volume", 00:10:05.850 "block_size": 4096, 00:10:05.850 "num_blocks": 38912, 00:10:05.850 "uuid": "a6395e7e-f697-4ff6-b2d9-f97be3b38b4b", 00:10:05.850 "assigned_rate_limits": { 00:10:05.850 "rw_ios_per_sec": 0, 00:10:05.850 "rw_mbytes_per_sec": 0, 00:10:05.850 "r_mbytes_per_sec": 0, 00:10:05.850 "w_mbytes_per_sec": 0 00:10:05.850 }, 00:10:05.850 "claimed": false, 00:10:05.850 "zoned": false, 00:10:05.850 "supported_io_types": { 00:10:05.850 "read": true, 00:10:05.850 "write": true, 00:10:05.850 "unmap": true, 00:10:05.850 "flush": false, 00:10:05.850 "reset": true, 00:10:05.850 "nvme_admin": false, 00:10:05.850 "nvme_io": false, 00:10:05.850 "nvme_io_md": false, 00:10:05.850 "write_zeroes": true, 00:10:05.850 "zcopy": false, 00:10:05.850 "get_zone_info": false, 00:10:05.850 "zone_management": false, 00:10:05.850 "zone_append": false, 00:10:05.850 "compare": false, 00:10:05.850 "compare_and_write": false, 
00:10:05.850 "abort": false, 00:10:05.850 "seek_hole": true, 00:10:05.850 "seek_data": true, 00:10:05.850 "copy": false, 00:10:05.850 "nvme_iov_md": false 00:10:05.850 }, 00:10:05.850 "driver_specific": { 00:10:05.850 "lvol": { 00:10:05.850 "lvol_store_uuid": "6356332c-9b41-49b3-897d-35d325143227", 00:10:05.850 "base_bdev": "aio_bdev", 00:10:05.850 "thin_provision": false, 00:10:05.850 "num_allocated_clusters": 38, 00:10:05.850 "snapshot": false, 00:10:05.850 "clone": false, 00:10:05.850 "esnap_clone": false 00:10:05.850 } 00:10:05.850 } 00:10:05.850 } 00:10:05.850 ] 00:10:05.850 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:05.850 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6356332c-9b41-49b3-897d-35d325143227 00:10:05.850 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:05.850 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:05.850 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6356332c-9b41-49b3-897d-35d325143227 00:10:05.850 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:06.110 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:06.110 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a6395e7e-f697-4ff6-b2d9-f97be3b38b4b 00:10:06.110 14:30:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6356332c-9b41-49b3-897d-35d325143227 00:10:06.370 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:06.629 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:06.629 00:10:06.629 real 0m15.765s 00:10:06.629 user 0m42.282s 00:10:06.629 sys 0m2.704s 00:10:06.629 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.629 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:06.629 ************************************ 00:10:06.629 END TEST lvs_grow_dirty 00:10:06.629 ************************************ 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:06.630 nvmf_trace.0 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:06.630 rmmod nvme_tcp 00:10:06.630 rmmod nvme_fabrics 00:10:06.630 rmmod nvme_keyring 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3721321 ']' 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3721321 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3721321 ']' 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3721321 
00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.630 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3721321 00:10:06.889 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.889 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.889 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3721321' 00:10:06.889 killing process with pid 3721321 00:10:06.889 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3721321 00:10:06.889 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3721321 00:10:06.889 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:06.889 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:06.889 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:06.889 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:06.889 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:06.889 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:06.889 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:06.889 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:06.889 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:10:06.889 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.889 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.889 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.430 14:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:09.430 00:10:09.430 real 0m39.611s 00:10:09.430 user 1m1.765s 00:10:09.430 sys 0m8.473s 00:10:09.430 14:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.430 14:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:09.430 ************************************ 00:10:09.430 END TEST nvmf_lvs_grow 00:10:09.430 ************************************ 00:10:09.430 14:30:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:09.430 14:30:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:09.430 14:30:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.430 14:30:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:09.430 ************************************ 00:10:09.430 START TEST nvmf_bdev_io_wait 00:10:09.430 ************************************ 00:10:09.430 14:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:09.430 * Looking for test storage... 
00:10:09.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.430 14:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:09.430 14:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:09.430 14:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:09.430 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.430 --rc genhtml_branch_coverage=1 00:10:09.430 --rc genhtml_function_coverage=1 00:10:09.430 --rc genhtml_legend=1 00:10:09.430 --rc geninfo_all_blocks=1 00:10:09.430 --rc geninfo_unexecuted_blocks=1 00:10:09.430 00:10:09.430 ' 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:09.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.430 --rc genhtml_branch_coverage=1 00:10:09.430 --rc genhtml_function_coverage=1 00:10:09.430 --rc genhtml_legend=1 00:10:09.430 --rc geninfo_all_blocks=1 00:10:09.430 --rc geninfo_unexecuted_blocks=1 00:10:09.430 00:10:09.430 ' 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:09.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.430 --rc genhtml_branch_coverage=1 00:10:09.430 --rc genhtml_function_coverage=1 00:10:09.430 --rc genhtml_legend=1 00:10:09.430 --rc geninfo_all_blocks=1 00:10:09.430 --rc geninfo_unexecuted_blocks=1 00:10:09.430 00:10:09.430 ' 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:09.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.430 --rc genhtml_branch_coverage=1 00:10:09.430 --rc genhtml_function_coverage=1 00:10:09.430 --rc genhtml_legend=1 00:10:09.430 --rc geninfo_all_blocks=1 00:10:09.430 --rc geninfo_unexecuted_blocks=1 00:10:09.430 00:10:09.430 ' 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.430 14:30:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.430 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:09.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:09.431 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:14.714 14:30:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:14.714 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:14.714 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.714 14:30:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:14.714 Found net devices under 0000:31:00.0: cvl_0_0 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.714 
14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:14.714 Found net devices under 0000:31:00.1: cvl_0_1 00:10:14.714 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.715 14:30:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:14.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:10:14.715 00:10:14.715 --- 10.0.0.2 ping statistics --- 00:10:14.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.715 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:14.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:10:14.715 00:10:14.715 --- 10.0.0.1 ping statistics --- 00:10:14.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.715 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3726396 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
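For readers following the `nvmf_tcp_init` trace above: the namespace plumbing it performs reduces to the commands below. This is a dry-run sketch (the interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.x addresses are taken from this log; the `run` wrapper only echoes each command, since the real steps need root and the actual e810 NIC ports):

```shell
# Dry-run sketch of the namespace setup traced by nvmf/common.sh above.
# The target-side interface is moved into its own netns so target and
# initiator can talk over two real NIC ports on one host.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # becomes the target interface, 10.0.0.2
INI_IF=cvl_0_1      # stays in the root namespace, 10.0.0.1

cmds=""
run() { cmds="$cmds $* ;"; echo "+ $*"; }   # swap the body for "$@" to really run

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

Once the real versions of these commands succeed, the two cross-namespace pings recorded in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) confirm connectivity before `nvmf_tgt` is launched under `ip netns exec`.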
nvmf/common.sh@510 -- # waitforlisten 3726396 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3726396 ']' 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.715 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:14.715 [2024-11-20 14:30:21.710641] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:10:14.715 [2024-11-20 14:30:21.710699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.975 [2024-11-20 14:30:21.795042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.975 [2024-11-20 14:30:21.838192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.975 [2024-11-20 14:30:21.838233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:14.975 [2024-11-20 14:30:21.838241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.975 [2024-11-20 14:30:21.838257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.975 [2024-11-20 14:30:21.838263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.975 [2024-11-20 14:30:21.839979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.975 [2024-11-20 14:30:21.840129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.975 [2024-11-20 14:30:21.840298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.975 [2024-11-20 14:30:21.840299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.544 14:30:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.544 [2024-11-20 14:30:22.574303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.544 Malloc0 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:15.544 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.544 
14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.805 [2024-11-20 14:30:22.618356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3726696 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3726697 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3726700 
00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3726702 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:15.805 { 00:10:15.805 "params": { 00:10:15.805 "name": "Nvme$subsystem", 00:10:15.805 "trtype": "$TEST_TRANSPORT", 00:10:15.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.805 "adrfam": "ipv4", 00:10:15.805 "trsvcid": "$NVMF_PORT", 00:10:15.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.805 "hdgst": ${hdgst:-false}, 00:10:15.805 "ddgst": ${ddgst:-false} 00:10:15.805 }, 00:10:15.805 "method": "bdev_nvme_attach_controller" 00:10:15.805 } 00:10:15.805 EOF 00:10:15.805 )") 00:10:15.805 14:30:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:15.805 { 00:10:15.805 "params": { 00:10:15.805 "name": "Nvme$subsystem", 00:10:15.805 "trtype": "$TEST_TRANSPORT", 00:10:15.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.805 "adrfam": "ipv4", 00:10:15.805 "trsvcid": "$NVMF_PORT", 00:10:15.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.805 "hdgst": ${hdgst:-false}, 00:10:15.805 "ddgst": ${ddgst:-false} 00:10:15.805 }, 00:10:15.805 "method": "bdev_nvme_attach_controller" 00:10:15.805 } 00:10:15.805 EOF 00:10:15.805 )") 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:15.805 { 00:10:15.805 "params": { 00:10:15.805 "name": "Nvme$subsystem", 00:10:15.805 "trtype": "$TEST_TRANSPORT", 00:10:15.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.805 "adrfam": "ipv4", 00:10:15.805 "trsvcid": "$NVMF_PORT", 00:10:15.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.805 "hdgst": ${hdgst:-false}, 00:10:15.805 "ddgst": ${ddgst:-false} 00:10:15.805 }, 00:10:15.805 "method": "bdev_nvme_attach_controller" 00:10:15.805 } 00:10:15.805 EOF 00:10:15.805 )") 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:15.805 { 00:10:15.805 "params": { 00:10:15.805 "name": "Nvme$subsystem", 00:10:15.805 "trtype": "$TEST_TRANSPORT", 00:10:15.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.805 "adrfam": "ipv4", 00:10:15.805 "trsvcid": "$NVMF_PORT", 00:10:15.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.805 "hdgst": ${hdgst:-false}, 00:10:15.805 "ddgst": ${ddgst:-false} 00:10:15.805 }, 00:10:15.805 "method": "bdev_nvme_attach_controller" 00:10:15.805 } 00:10:15.805 EOF 00:10:15.805 )") 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3726696 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:15.805 "params": { 00:10:15.805 "name": "Nvme1", 00:10:15.805 "trtype": "tcp", 00:10:15.805 "traddr": "10.0.0.2", 00:10:15.805 "adrfam": "ipv4", 00:10:15.805 "trsvcid": "4420", 00:10:15.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.805 "hdgst": false, 00:10:15.805 "ddgst": false 00:10:15.805 }, 00:10:15.805 "method": "bdev_nvme_attach_controller" 00:10:15.805 }' 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:15.805 "params": { 00:10:15.805 "name": "Nvme1", 00:10:15.805 "trtype": "tcp", 00:10:15.805 "traddr": "10.0.0.2", 00:10:15.805 "adrfam": "ipv4", 00:10:15.805 "trsvcid": "4420", 00:10:15.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.805 "hdgst": false, 00:10:15.805 "ddgst": false 00:10:15.805 }, 00:10:15.805 "method": "bdev_nvme_attach_controller" 00:10:15.805 }' 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf 
'%s\n' '{ 00:10:15.805 "params": { 00:10:15.805 "name": "Nvme1", 00:10:15.805 "trtype": "tcp", 00:10:15.805 "traddr": "10.0.0.2", 00:10:15.805 "adrfam": "ipv4", 00:10:15.805 "trsvcid": "4420", 00:10:15.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.805 "hdgst": false, 00:10:15.805 "ddgst": false 00:10:15.805 }, 00:10:15.805 "method": "bdev_nvme_attach_controller" 00:10:15.805 }' 00:10:15.805 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:15.806 "params": { 00:10:15.806 "name": "Nvme1", 00:10:15.806 "trtype": "tcp", 00:10:15.806 "traddr": "10.0.0.2", 00:10:15.806 "adrfam": "ipv4", 00:10:15.806 "trsvcid": "4420", 00:10:15.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.806 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.806 "hdgst": false, 00:10:15.806 "ddgst": false 00:10:15.806 }, 00:10:15.806 "method": "bdev_nvme_attach_controller" 00:10:15.806 }' 00:10:15.806 [2024-11-20 14:30:22.659188] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:10:15.806 [2024-11-20 14:30:22.659268] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:15.806 [2024-11-20 14:30:22.659671] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:10:15.806 [2024-11-20 14:30:22.659725] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:15.806 [2024-11-20 14:30:22.660508] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:10:15.806 [2024-11-20 14:30:22.660576] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:15.806 [2024-11-20 14:30:22.661180] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:10:15.806 [2024-11-20 14:30:22.661235] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:15.806 [2024-11-20 14:30:22.850330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.065 [2024-11-20 14:30:22.886277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:16.065 [2024-11-20 14:30:22.899952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.065 [2024-11-20 14:30:22.940404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:16.066 [2024-11-20 14:30:22.954280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.066 [2024-11-20 14:30:22.993532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:16.066 [2024-11-20 14:30:23.041985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.066 [2024-11-20 14:30:23.080786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:16.325 Running I/O for 1 seconds... 00:10:16.325 Running I/O for 1 seconds... 00:10:16.325 Running I/O for 1 seconds... 00:10:16.325 Running I/O for 1 seconds... 
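The `gen_nvmf_target_json` heredocs assembled above expand, per subsystem, to one attach-controller stanza. A minimal standalone re-creation of that expansion (the variable names and substituted values are the ones visible in this log's trace; the real helper additionally merges multiple stanzas through `jq`):

```shell
# Re-create the per-subsystem config stanza that gen_nvmf_target_json
# builds above, with the values this run substituted in.
subsystem=1
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

Each of the four `bdevperf` instances (write, read, flush, unmap) receives this JSON on `/dev/fd/63` via process substitution, which is why the same `Nvme1`/`cnode1` stanza is printed four times in the trace above.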
00:10:17.263 182488.00 IOPS, 712.84 MiB/s 00:10:17.263 Latency(us) 00:10:17.263 [2024-11-20T13:30:24.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.263 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:17.263 Nvme1n1 : 1.00 182094.66 711.31 0.00 0.00 698.85 298.67 2129.92 00:10:17.263 [2024-11-20T13:30:24.323Z] =================================================================================================================== 00:10:17.263 [2024-11-20T13:30:24.323Z] Total : 182094.66 711.31 0.00 0.00 698.85 298.67 2129.92 00:10:17.522 16864.00 IOPS, 65.88 MiB/s 00:10:17.522 Latency(us) 00:10:17.522 [2024-11-20T13:30:24.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.522 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:17.522 Nvme1n1 : 1.01 16910.56 66.06 0.00 0.00 7549.44 3440.64 13489.49 00:10:17.522 [2024-11-20T13:30:24.582Z] =================================================================================================================== 00:10:17.522 [2024-11-20T13:30:24.582Z] Total : 16910.56 66.06 0.00 0.00 7549.44 3440.64 13489.49 00:10:17.522 16677.00 IOPS, 65.14 MiB/s 00:10:17.522 Latency(us) 00:10:17.522 [2024-11-20T13:30:24.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.522 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:17.522 Nvme1n1 : 1.01 16748.23 65.42 0.00 0.00 7620.70 3440.64 17803.95 00:10:17.522 [2024-11-20T13:30:24.582Z] =================================================================================================================== 00:10:17.522 [2024-11-20T13:30:24.582Z] Total : 16748.23 65.42 0.00 0.00 7620.70 3440.64 17803.95 00:10:17.522 12591.00 IOPS, 49.18 MiB/s 00:10:17.522 Latency(us) 00:10:17.522 [2024-11-20T13:30:24.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.522 Job: Nvme1n1 (Core Mask 
0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:17.522 Nvme1n1 : 1.01 12668.68 49.49 0.00 0.00 10075.76 3850.24 21845.33 00:10:17.522 [2024-11-20T13:30:24.582Z] =================================================================================================================== 00:10:17.522 [2024-11-20T13:30:24.582Z] Total : 12668.68 49.49 0.00 0.00 10075.76 3850.24 21845.33 00:10:17.522 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3726697 00:10:17.522 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3726700 00:10:17.522 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3726702 00:10:17.522 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.522 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.522 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:17.522 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.522 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:17.522 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:17.522 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.523 rmmod nvme_tcp 00:10:17.523 rmmod nvme_fabrics 00:10:17.523 rmmod nvme_keyring 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3726396 ']' 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3726396 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3726396 ']' 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3726396 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3726396 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3726396' 00:10:17.523 killing process with pid 3726396 00:10:17.523 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3726396 00:10:17.523 14:30:24 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3726396 00:10:17.782 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:17.782 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:17.782 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:17.782 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:17.782 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:17.782 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:17.782 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:17.782 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:17.782 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:17.782 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.782 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.782 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.687 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:19.687 00:10:19.687 real 0m10.813s 00:10:19.687 user 0m18.282s 00:10:19.687 sys 0m5.782s 00:10:19.687 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.687 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:19.687 ************************************ 
00:10:19.687 END TEST nvmf_bdev_io_wait 00:10:19.687 ************************************ 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:19.947 ************************************ 00:10:19.947 START TEST nvmf_queue_depth 00:10:19.947 ************************************ 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:19.947 * Looking for test storage... 00:10:19.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:19.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.947 --rc genhtml_branch_coverage=1 00:10:19.947 --rc genhtml_function_coverage=1 00:10:19.947 --rc genhtml_legend=1 00:10:19.947 --rc geninfo_all_blocks=1 00:10:19.947 --rc 
geninfo_unexecuted_blocks=1 00:10:19.947 00:10:19.947 ' 00:10:19.947 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:19.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.947 --rc genhtml_branch_coverage=1 00:10:19.947 --rc genhtml_function_coverage=1 00:10:19.948 --rc genhtml_legend=1 00:10:19.948 --rc geninfo_all_blocks=1 00:10:19.948 --rc geninfo_unexecuted_blocks=1 00:10:19.948 00:10:19.948 ' 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:19.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.948 --rc genhtml_branch_coverage=1 00:10:19.948 --rc genhtml_function_coverage=1 00:10:19.948 --rc genhtml_legend=1 00:10:19.948 --rc geninfo_all_blocks=1 00:10:19.948 --rc geninfo_unexecuted_blocks=1 00:10:19.948 00:10:19.948 ' 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:19.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.948 --rc genhtml_branch_coverage=1 00:10:19.948 --rc genhtml_function_coverage=1 00:10:19.948 --rc genhtml_legend=1 00:10:19.948 --rc geninfo_all_blocks=1 00:10:19.948 --rc geninfo_unexecuted_blocks=1 00:10:19.948 00:10:19.948 ' 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.948 14:30:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.948 14:30:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.948 14:30:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:19.948 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.317 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:25.317 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:25.317 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:25.317 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:25.317 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:25.317 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:25.317 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:25.317 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:25.317 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:25.317 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:25.317 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:25.317 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:25.317 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:25.317 14:30:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:25.317 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:25.318 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:25.318 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:25.318 Found net devices under 0000:31:00.0: cvl_0_0 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:25.318 Found net devices under 0000:31:00.1: cvl_0_1 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:25.318 
14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:25.318 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:25.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:25.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:10:25.578 00:10:25.578 --- 10.0.0.2 ping statistics --- 00:10:25.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.578 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:25.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:25.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:10:25.578 00:10:25.578 --- 10.0.0.1 ping statistics --- 00:10:25.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.578 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3731474 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
3731474 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3731474 ']' 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.578 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.837 [2024-11-20 14:30:32.644980] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:10:25.837 [2024-11-20 14:30:32.645048] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.838 [2024-11-20 14:30:32.719426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.838 [2024-11-20 14:30:32.748422] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.838 [2024-11-20 14:30:32.748448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:25.838 [2024-11-20 14:30:32.748454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.838 [2024-11-20 14:30:32.748459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.838 [2024-11-20 14:30:32.748463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.838 [2024-11-20 14:30:32.748901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.838 [2024-11-20 14:30:32.847580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.838 Malloc0 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.838 [2024-11-20 14:30:32.885724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.838 14:30:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3731497 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3731497 /var/tmp/bdevperf.sock 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3731497 ']' 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:25.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.838 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:26.097 [2024-11-20 14:30:32.923283] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:10:26.097 [2024-11-20 14:30:32.923332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731497 ] 00:10:26.097 [2024-11-20 14:30:33.000286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.097 [2024-11-20 14:30:33.036581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.665 14:30:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.665 14:30:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:26.665 14:30:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:26.665 14:30:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.665 14:30:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:26.924 NVMe0n1 00:10:26.924 14:30:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.924 14:30:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:26.924 Running I/O for 10 seconds... 
00:10:28.798 11264.00 IOPS, 44.00 MiB/s [2024-11-20T13:30:37.235Z] 11771.50 IOPS, 45.98 MiB/s [2024-11-20T13:30:38.175Z] 12281.67 IOPS, 47.98 MiB/s [2024-11-20T13:30:39.111Z] 12546.75 IOPS, 49.01 MiB/s [2024-11-20T13:30:40.052Z] 12730.00 IOPS, 49.73 MiB/s [2024-11-20T13:30:40.990Z] 12898.00 IOPS, 50.38 MiB/s [2024-11-20T13:30:41.927Z] 13001.29 IOPS, 50.79 MiB/s [2024-11-20T13:30:43.306Z] 13049.62 IOPS, 50.98 MiB/s [2024-11-20T13:30:43.875Z] 13149.11 IOPS, 51.36 MiB/s [2024-11-20T13:30:44.135Z] 13201.20 IOPS, 51.57 MiB/s 00:10:37.075 Latency(us) 00:10:37.075 [2024-11-20T13:30:44.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.075 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:37.076 Verification LBA range: start 0x0 length 0x4000 00:10:37.076 NVMe0n1 : 10.06 13230.54 51.68 0.00 0.00 77153.75 24139.09 55487.15 00:10:37.076 [2024-11-20T13:30:44.136Z] =================================================================================================================== 00:10:37.076 [2024-11-20T13:30:44.136Z] Total : 13230.54 51.68 0.00 0.00 77153.75 24139.09 55487.15 00:10:37.076 { 00:10:37.076 "results": [ 00:10:37.076 { 00:10:37.076 "job": "NVMe0n1", 00:10:37.076 "core_mask": "0x1", 00:10:37.076 "workload": "verify", 00:10:37.076 "status": "finished", 00:10:37.076 "verify_range": { 00:10:37.076 "start": 0, 00:10:37.076 "length": 16384 00:10:37.076 }, 00:10:37.076 "queue_depth": 1024, 00:10:37.076 "io_size": 4096, 00:10:37.076 "runtime": 10.055217, 00:10:37.076 "iops": 13230.544900224431, 00:10:37.076 "mibps": 51.681816016501685, 00:10:37.076 "io_failed": 0, 00:10:37.076 "io_timeout": 0, 00:10:37.076 "avg_latency_us": 77153.75265722562, 00:10:37.076 "min_latency_us": 24139.093333333334, 00:10:37.076 "max_latency_us": 55487.14666666667 00:10:37.076 } 00:10:37.076 ], 00:10:37.076 "core_count": 1 00:10:37.076 } 00:10:37.076 14:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 3731497 00:10:37.076 14:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3731497 ']' 00:10:37.076 14:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3731497 00:10:37.076 14:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:37.076 14:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.076 14:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3731497 00:10:37.076 14:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.076 14:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.076 14:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3731497' 00:10:37.076 killing process with pid 3731497 00:10:37.076 14:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3731497 00:10:37.076 Received shutdown signal, test time was about 10.000000 seconds 00:10:37.076 00:10:37.076 Latency(us) 00:10:37.076 [2024-11-20T13:30:44.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.076 [2024-11-20T13:30:44.136Z] =================================================================================================================== 00:10:37.076 [2024-11-20T13:30:44.136Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:37.076 14:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3731497 00:10:37.076 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:37.076 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:10:37.076 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:37.076 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:37.076 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:37.076 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:37.076 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.076 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:37.076 rmmod nvme_tcp 00:10:37.076 rmmod nvme_fabrics 00:10:37.335 rmmod nvme_keyring 00:10:37.335 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.335 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:37.335 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:37.335 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3731474 ']' 00:10:37.335 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3731474 00:10:37.335 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3731474 ']' 00:10:37.335 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3731474 00:10:37.335 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:37.335 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.335 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3731474 00:10:37.335 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:37.335 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:37.335 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3731474' 00:10:37.335 killing process with pid 3731474 00:10:37.335 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3731474 00:10:37.335 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3731474 00:10:37.335 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:37.335 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:37.336 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:37.336 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:37.336 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:37.336 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:37.336 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:37.336 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:37.336 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:37.336 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.336 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.336 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.874 14:30:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:39.874 00:10:39.874 real 0m19.594s 00:10:39.874 user 0m23.970s 00:10:39.874 sys 0m5.338s 00:10:39.874 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.874 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:39.874 ************************************ 00:10:39.874 END TEST nvmf_queue_depth 00:10:39.874 ************************************ 00:10:39.874 14:30:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:39.874 14:30:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:39.875 ************************************ 00:10:39.875 START TEST nvmf_target_multipath 00:10:39.875 ************************************ 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:39.875 * Looking for test storage... 
00:10:39.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:39.875 14:30:46 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:39.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.875 --rc genhtml_branch_coverage=1 00:10:39.875 --rc genhtml_function_coverage=1 00:10:39.875 --rc genhtml_legend=1 00:10:39.875 --rc geninfo_all_blocks=1 00:10:39.875 --rc geninfo_unexecuted_blocks=1 00:10:39.875 00:10:39.875 ' 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:39.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.875 --rc genhtml_branch_coverage=1 00:10:39.875 --rc genhtml_function_coverage=1 00:10:39.875 --rc genhtml_legend=1 00:10:39.875 --rc geninfo_all_blocks=1 00:10:39.875 --rc geninfo_unexecuted_blocks=1 00:10:39.875 00:10:39.875 ' 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:39.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.875 --rc genhtml_branch_coverage=1 00:10:39.875 --rc genhtml_function_coverage=1 00:10:39.875 --rc genhtml_legend=1 00:10:39.875 --rc geninfo_all_blocks=1 00:10:39.875 --rc geninfo_unexecuted_blocks=1 00:10:39.875 00:10:39.875 ' 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:39.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.875 --rc genhtml_branch_coverage=1 00:10:39.875 --rc genhtml_function_coverage=1 00:10:39.875 --rc genhtml_legend=1 00:10:39.875 --rc geninfo_all_blocks=1 00:10:39.875 --rc geninfo_unexecuted_blocks=1 00:10:39.875 00:10:39.875 ' 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:39.875 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:39.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:39.876 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:45.149 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:45.150 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:45.150 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
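The trace above shows nvmf/common.sh bucketing supported NICs into `e810`, `x722` and `mlx` arrays keyed by PCI vendor:device ID, then picking the e810 set for this run. A minimal sketch of that bucketing, with a hypothetical `classify_nic` helper standing in for the real `pci_bus_cache` map built from `/sys/bus/pci/devices`:

```shell
#!/usr/bin/env bash
# Minimal sketch of the NIC bucketing traced above: sort NICs into
# e810/x722/mlx buckets by PCI vendor:device ID. classify_nic is a
# hypothetical helper; the real script indexes a pci_bus_cache map
# built from /sys/bus/pci/devices instead of taking an ID string.
intel=0x8086 mellanox=0x15b3

classify_nic() {  # $1 = "vendor:device", prints the bucket name
    case "$1" in
        "$intel:0x1592"|"$intel:0x159b") echo e810 ;;
        "$intel:0x37d2")                 echo x722 ;;
        "$mellanox:"*)                   echo mlx ;;
        *)                               echo unknown ;;
    esac
}

classify_nic "0x8086:0x159b"  # the device ID found twice in this run -> e810
```

In this log both ports (0000:31:00.0 and 0000:31:00.1) report 0x8086:0x159b, which is why `pci_devs` ends up holding the two-entry e810 list.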
00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:45.150 Found net devices under 0000:31:00.0: cvl_0_0 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:45.150 14:30:51 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:45.150 Found net devices under 0000:31:00.1: cvl_0_1 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
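The `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` glob and the `${pci_net_devs[@]##*/}` basename strip traced above are how the script maps a PCI address to kernel interface names (here `cvl_0_0`/`cvl_0_1`). A sketch of that lookup; the `SYSFS_ROOT` knob is an assumption added here so it can run against a fake tree (SPDK hardcodes `/sys`):

```shell
#!/usr/bin/env bash
# Sketch of the pci_net_devs lookup traced above: glob the net/ subdirectory
# of a PCI device and keep only the interface names. SYSFS_ROOT is an
# assumption added for a hardware-free demo; the real script globs /sys.
net_devs_for_pci() {  # $1 = PCI address, e.g. 0000:31:00.0
    local pci=$1 devs
    devs=("${SYSFS_ROOT:-/sys}/bus/pci/devices/$pci/net/"*)
    devs=("${devs[@]##*/}")   # basename each entry, as ${pci_net_devs[@]##*/} does
    printf '%s\n' "${devs[@]}"
}

# Build a fake sysfs tree mirroring the devices found in this run.
SYSFS_ROOT=$(mktemp -d)
mkdir -p "$SYSFS_ROOT/bus/pci/devices/0000:31:00.0/net/cvl_0_0"
net_devs_for_pci 0000:31:00.0   # prints: cvl_0_0
```

Each interface found this way is appended to `net_devs`, which later becomes `TCP_INTERFACE_LIST` for the target/initiator split.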
00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:45.150 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:45.150 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:45.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:45.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:10:45.150 00:10:45.150 --- 10.0.0.2 ping statistics --- 00:10:45.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.150 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:10:45.150 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:45.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
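The `ipts` call above (and its expansion at nvmf/common.sh@790) caps the namespace plumbing by opening TCP port 4420, and it tags the rule with an `SPDK_NVMF` comment so teardown can remove exactly the rules this test added. A sketch of that wrapper; `IPTABLES=echo` is an assumption for a root-free dry run, whereas the real wrapper invokes iptables directly:

```shell
#!/usr/bin/env bash
# Sketch of the ipts wrapper whose expansion is visible in the trace above:
# pass the rule arguments through unchanged, then append a comment match
# embedding the whole rule text under an SPDK_NVMF: prefix.
# IPTABLES=echo is an assumption so this runs without root.
IPTABLES=${IPTABLES:-echo}

ipts() {
    "$IPTABLES" "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The tag pays off later in the log, where cleanup filters the saved ruleset with `iptables-save | grep -v SPDK_NVMF | iptables-restore`.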
00:10:45.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:10:45.150 00:10:45.150 --- 10.0.0.1 ping statistics --- 00:10:45.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.150 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:45.151 only one NIC for nvmf test 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:45.151 14:30:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:45.151 rmmod nvme_tcp 00:10:45.151 rmmod nvme_fabrics 00:10:45.151 rmmod nvme_keyring 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.151 14:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:47.693 00:10:47.693 real 0m7.748s 00:10:47.693 user 0m1.413s 00:10:47.693 sys 0m4.210s 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:47.693 ************************************ 00:10:47.693 END TEST nvmf_target_multipath 00:10:47.693 ************************************ 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:47.693 ************************************ 00:10:47.693 START TEST nvmf_zcopy 00:10:47.693 ************************************ 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:47.693 * Looking for test storage... 00:10:47.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
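The `lt 1.15 2` trace that follows (scripts/common.sh `cmp_versions`) splits both version strings on `.`, `-` and `:` and compares them field by field, padding missing fields with 0. A simplified sketch under that reading; the real helper additionally validates each field as a decimal before comparing, which this version omits:

```shell
#!/usr/bin/env bash
# Simplified sketch of the cmp_versions/lt logic traced below: split both
# versions on .-: and compare numerically field by field, treating any
# missing field as 0. Non-numeric fields (e.g. "rc1") are not handled here.
lt() {  # returns 0 when version $1 < version $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
    done
    return 1  # equal is not less-than
}

lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

That is the comparison deciding which `--rc lcov_*` coverage options get exported for this lcov version.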
00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.693 14:30:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:47.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.693 --rc genhtml_branch_coverage=1 00:10:47.693 --rc genhtml_function_coverage=1 00:10:47.693 --rc genhtml_legend=1 00:10:47.693 --rc geninfo_all_blocks=1 00:10:47.693 --rc geninfo_unexecuted_blocks=1 00:10:47.693 00:10:47.693 ' 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:47.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.693 --rc genhtml_branch_coverage=1 00:10:47.693 --rc genhtml_function_coverage=1 00:10:47.693 --rc genhtml_legend=1 00:10:47.693 --rc geninfo_all_blocks=1 00:10:47.693 --rc geninfo_unexecuted_blocks=1 00:10:47.693 00:10:47.693 ' 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:47.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.693 --rc genhtml_branch_coverage=1 00:10:47.693 --rc genhtml_function_coverage=1 00:10:47.693 --rc genhtml_legend=1 00:10:47.693 --rc geninfo_all_blocks=1 00:10:47.693 --rc geninfo_unexecuted_blocks=1 00:10:47.693 00:10:47.693 ' 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:47.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.693 --rc genhtml_branch_coverage=1 00:10:47.693 --rc 
genhtml_function_coverage=1 00:10:47.693 --rc genhtml_legend=1 00:10:47.693 --rc geninfo_all_blocks=1 00:10:47.693 --rc geninfo_unexecuted_blocks=1 00:10:47.693 00:10:47.693 ' 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.693 14:30:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:47.693 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:47.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:47.694 14:30:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:47.694 14:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:52.968 14:30:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:52.968 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:52.968 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:52.968 Found net devices under 0000:31:00.0: cvl_0_0 00:10:52.968 14:30:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:52.968 Found net devices under 0000:31:00.1: cvl_0_1 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:52.968 14:30:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:52.968 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:53.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:53.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:10:53.227 00:10:53.227 --- 10.0.0.2 ping statistics --- 00:10:53.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.227 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:53.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:10:53.227 00:10:53.227 --- 10.0.0.1 ping statistics --- 00:10:53.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.227 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3742859 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3742859 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- 
# '[' -z 3742859 ']' 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.227 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:53.227 [2024-11-20 14:31:00.124955] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:10:53.227 [2024-11-20 14:31:00.125022] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.227 [2024-11-20 14:31:00.216833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.227 [2024-11-20 14:31:00.266269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.227 [2024-11-20 14:31:00.266326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:53.227 [2024-11-20 14:31:00.266335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.227 [2024-11-20 14:31:00.266342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.227 [2024-11-20 14:31:00.266348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:53.227 [2024-11-20 14:31:00.267207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:54.164 [2024-11-20 14:31:00.950036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:54.164 [2024-11-20 14:31:00.966200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.164 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:54.165 malloc0 00:10:54.165 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:54.165 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:54.165 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.165 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:54.165 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.165 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:54.165 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:54.165 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:54.165 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:54.165 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:54.165 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:54.165 { 00:10:54.165 "params": { 00:10:54.165 "name": "Nvme$subsystem", 00:10:54.165 "trtype": "$TEST_TRANSPORT", 00:10:54.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:54.165 "adrfam": "ipv4", 00:10:54.165 "trsvcid": "$NVMF_PORT", 00:10:54.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:54.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:54.165 "hdgst": ${hdgst:-false}, 00:10:54.165 "ddgst": ${ddgst:-false} 00:10:54.165 }, 00:10:54.165 "method": "bdev_nvme_attach_controller" 00:10:54.165 } 00:10:54.165 EOF 00:10:54.165 )") 00:10:54.165 14:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:54.165 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:54.165 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:54.165 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:54.165 "params": { 00:10:54.165 "name": "Nvme1", 00:10:54.165 "trtype": "tcp", 00:10:54.165 "traddr": "10.0.0.2", 00:10:54.165 "adrfam": "ipv4", 00:10:54.165 "trsvcid": "4420", 00:10:54.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:54.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:54.165 "hdgst": false, 00:10:54.165 "ddgst": false 00:10:54.165 }, 00:10:54.165 "method": "bdev_nvme_attach_controller" 00:10:54.165 }' 00:10:54.165 [2024-11-20 14:31:01.030673] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:10:54.165 [2024-11-20 14:31:01.030722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3743080 ] 00:10:54.165 [2024-11-20 14:31:01.107714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.165 [2024-11-20 14:31:01.144023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.424 Running I/O for 10 seconds... 
00:10:56.297 9821.00 IOPS, 76.73 MiB/s [2024-11-20T13:31:04.735Z] 9894.00 IOPS, 77.30 MiB/s [2024-11-20T13:31:05.673Z] 9924.00 IOPS, 77.53 MiB/s [2024-11-20T13:31:06.611Z] 9935.00 IOPS, 77.62 MiB/s [2024-11-20T13:31:07.548Z] 9948.80 IOPS, 77.72 MiB/s [2024-11-20T13:31:08.485Z] 9952.83 IOPS, 77.76 MiB/s [2024-11-20T13:31:09.423Z] 9963.71 IOPS, 77.84 MiB/s [2024-11-20T13:31:10.800Z] 9964.75 IOPS, 77.85 MiB/s [2024-11-20T13:31:11.368Z] 9965.56 IOPS, 77.86 MiB/s [2024-11-20T13:31:11.628Z] 9966.70 IOPS, 77.86 MiB/s 00:11:04.568 Latency(us) 00:11:04.568 [2024-11-20T13:31:11.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.568 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:04.568 Verification LBA range: start 0x0 length 0x1000 00:11:04.568 Nvme1n1 : 10.01 9969.15 77.88 0.00 0.00 12798.02 2266.45 22063.79 00:11:04.568 [2024-11-20T13:31:11.628Z] =================================================================================================================== 00:11:04.568 [2024-11-20T13:31:11.628Z] Total : 9969.15 77.88 0.00 0.00 12798.02 2266.45 22063.79 00:11:04.568 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3745228 00:11:04.568 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:04.568 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:04.568 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:04.568 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:04.568 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:04.568 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:04.568 14:31:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:04.568 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:04.568 { 00:11:04.568 "params": { 00:11:04.568 "name": "Nvme$subsystem", 00:11:04.568 "trtype": "$TEST_TRANSPORT", 00:11:04.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:04.568 "adrfam": "ipv4", 00:11:04.568 "trsvcid": "$NVMF_PORT", 00:11:04.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:04.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:04.568 "hdgst": ${hdgst:-false}, 00:11:04.568 "ddgst": ${ddgst:-false} 00:11:04.568 }, 00:11:04.568 "method": "bdev_nvme_attach_controller" 00:11:04.568 } 00:11:04.568 EOF 00:11:04.568 )") 00:11:04.568 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:04.568 [2024-11-20 14:31:11.476613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.568 [2024-11-20 14:31:11.476644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.568 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:11:04.568 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:04.568 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:04.568 "params": { 00:11:04.568 "name": "Nvme1", 00:11:04.568 "trtype": "tcp", 00:11:04.568 "traddr": "10.0.0.2", 00:11:04.568 "adrfam": "ipv4", 00:11:04.568 "trsvcid": "4420", 00:11:04.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:04.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:04.568 "hdgst": false, 00:11:04.568 "ddgst": false 00:11:04.568 }, 00:11:04.568 "method": "bdev_nvme_attach_controller" 00:11:04.568 }' 00:11:04.568 [2024-11-20 14:31:11.484592] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.568 [2024-11-20 14:31:11.484601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.568 [2024-11-20 14:31:11.492610] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.568 [2024-11-20 14:31:11.492618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.568 [2024-11-20 14:31:11.500630] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.568 [2024-11-20 14:31:11.500639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.568 [2024-11-20 14:31:11.502262] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:11:04.568 [2024-11-20 14:31:11.502309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745228 ] 00:11:04.568 [2024-11-20 14:31:11.508649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.568 [2024-11-20 14:31:11.508657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.568 [2024-11-20 14:31:11.520679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.568 [2024-11-20 14:31:11.520687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.568 [2024-11-20 14:31:11.528699] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.568 [2024-11-20 14:31:11.528707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.568 [2024-11-20 14:31:11.536720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.568 [2024-11-20 14:31:11.536728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.568 [2024-11-20 14:31:11.544741] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.568 [2024-11-20 14:31:11.544749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.568 [2024-11-20 14:31:11.552762] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.568 [2024-11-20 14:31:11.552769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.568 [2024-11-20 14:31:11.560783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.568 [2024-11-20 14:31:11.560790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:11:04.568 [2024-11-20 14:31:11.567748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:04.568 [2024-11-20 14:31:11.568803] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:04.568 [2024-11-20 14:31:11.568810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:04.568 [2024-11-20 14:31:11.597775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[... the two *ERROR* lines above ("Requested NSID 1 already in use" / "Unable to add namespace") repeat continuously, roughly every 8-9 ms, from 14:31:11.568 through 14:31:13.007; only the distinct lines below are kept ...]
00:11:04.828 Running I/O for 5 seconds...
00:11:05.899 19416.00 IOPS, 151.69 MiB/s [2024-11-20T13:31:12.959Z]
add namespace 00:11:06.159 [2024-11-20 14:31:13.015617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.159 [2024-11-20 14:31:13.015631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.159 [2024-11-20 14:31:13.024367] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.159 [2024-11-20 14:31:13.024382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.159 [2024-11-20 14:31:13.032680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.159 [2024-11-20 14:31:13.032695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.159 [2024-11-20 14:31:13.042042] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.159 [2024-11-20 14:31:13.042057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.159 [2024-11-20 14:31:13.050941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.159 [2024-11-20 14:31:13.050956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.159 [2024-11-20 14:31:13.059748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.159 [2024-11-20 14:31:13.059762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.159 [2024-11-20 14:31:13.068608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.159 [2024-11-20 14:31:13.068623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.160 [2024-11-20 14:31:13.077801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.160 [2024-11-20 14:31:13.077820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.160 [2024-11-20 14:31:13.086637] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.160 [2024-11-20 14:31:13.086651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.160 [2024-11-20 14:31:13.095330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.160 [2024-11-20 14:31:13.095344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.160 [2024-11-20 14:31:13.104142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.160 [2024-11-20 14:31:13.104157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.160 [2024-11-20 14:31:13.113280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.160 [2024-11-20 14:31:13.113294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.160 [2024-11-20 14:31:13.122114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.160 [2024-11-20 14:31:13.122129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.160 [2024-11-20 14:31:13.130816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.160 [2024-11-20 14:31:13.130830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.160 [2024-11-20 14:31:13.139890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.160 [2024-11-20 14:31:13.139905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.160 [2024-11-20 14:31:13.148952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.160 [2024-11-20 14:31:13.148967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.160 [2024-11-20 14:31:13.158168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:06.160 [2024-11-20 14:31:13.158183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.160 [2024-11-20 14:31:13.167198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.160 [2024-11-20 14:31:13.167213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.160 [2024-11-20 14:31:13.175778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.160 [2024-11-20 14:31:13.175793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.160 [2024-11-20 14:31:13.184778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.160 [2024-11-20 14:31:13.184792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.160 [2024-11-20 14:31:13.193666] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.160 [2024-11-20 14:31:13.193681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.160 [2024-11-20 14:31:13.202460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.160 [2024-11-20 14:31:13.202474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.160 [2024-11-20 14:31:13.211672] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.160 [2024-11-20 14:31:13.211686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.220443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.220458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.229357] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 
[2024-11-20 14:31:13.229371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.238103] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.238117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.246892] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.246910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.255876] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.255891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.264662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.264677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.273726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.273741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.282804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.282819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.291669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.291683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.300558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.300573] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.309475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.309489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.318501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.318515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.326946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.326960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.336051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.336066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.344529] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.344543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.353621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.353635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.362764] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.362779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.371782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.371796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:06.419 [2024-11-20 14:31:13.380933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.380948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.389877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.389892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.397734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.397748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.406610] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.406625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.415308] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.415326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.424622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.424636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.433321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.433336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.442052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.442067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.451313] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.451327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.459807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.459822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.468426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.468440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.419 [2024-11-20 14:31:13.477173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.419 [2024-11-20 14:31:13.477187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.485948] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.485963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.494845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.494860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.503889] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.503903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.512477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.512491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.521153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.521167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.530124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.530138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.539227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.539241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.547834] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.547847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.556812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.556826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.565873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.565888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.574880] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.574894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.583587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.583604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.592785] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 
[2024-11-20 14:31:13.592799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.601195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.601209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.610085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.610099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.619009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.619024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.627972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.627986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.637094] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.637108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.645486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.645500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.654767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.654781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.663304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.663318] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.671958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.671973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.680682] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.680696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.689446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.689460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.698324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.698339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.679 [2024-11-20 14:31:13.707016] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.679 [2024-11-20 14:31:13.707030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.680 [2024-11-20 14:31:13.715965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.680 [2024-11-20 14:31:13.715979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.680 [2024-11-20 14:31:13.724716] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.680 [2024-11-20 14:31:13.724730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.680 [2024-11-20 14:31:13.733708] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.680 [2024-11-20 14:31:13.733723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:06.939 [2024-11-20 14:31:13.742420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.742435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.751311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.751325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.760414] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.760428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.769118] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.769132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.777797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.777812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.786518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.786532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.795184] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.795197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.804339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.804353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.813338] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.813353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.822717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.822732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.831075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.831089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.839902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.839917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.849034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.849048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.858132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.858147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 19528.00 IOPS, 152.56 MiB/s [2024-11-20T13:31:13.999Z] [2024-11-20 14:31:13.867075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.867090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.876179] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.876193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.885231] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.885248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.894229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.894243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.903321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.903335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.911856] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.911871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.920976] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.920991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.929968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.929983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.938992] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.939006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.947862] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.947876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.956854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.956869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.965414] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.965428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.974322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.974336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.983203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.983217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.939 [2024-11-20 14:31:13.992128] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.939 [2024-11-20 14:31:13.992142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.199 [2024-11-20 14:31:14.001108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.199 [2024-11-20 14:31:14.001122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.199 [2024-11-20 14:31:14.010072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.199 [2024-11-20 14:31:14.010087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.199 [2024-11-20 14:31:14.019208] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.199 [2024-11-20 14:31:14.019222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.199 [2024-11-20 14:31:14.028161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.199 
[2024-11-20 14:31:14.028175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.199 [2024-11-20 14:31:14.037171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.199 [2024-11-20 14:31:14.037185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.199 [2024-11-20 14:31:14.046156] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.199 [2024-11-20 14:31:14.046170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.199 [2024-11-20 14:31:14.055047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.199 [2024-11-20 14:31:14.055061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.199 [2024-11-20 14:31:14.063954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.199 [2024-11-20 14:31:14.063968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.199 [2024-11-20 14:31:14.072806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.199 [2024-11-20 14:31:14.072821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.199 [2024-11-20 14:31:14.082076] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.082093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.090470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.090485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.099363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.099378] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.108373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.108387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.117503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.117517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.125881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.125895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.135030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.135044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.143929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.143943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.152408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.152423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.160982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.160996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.169944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.169959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:07.200 [2024-11-20 14:31:14.178778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.178792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.187380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.187394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.196266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.196280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.205269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.205283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.214192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.214206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.222819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.222832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.231760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.231774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.240815] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.240829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.249719] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.249736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.200 [2024-11-20 14:31:14.258547] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.200 [2024-11-20 14:31:14.258561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.266800] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.266814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.275788] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.275802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.284317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.284332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.293675] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.293689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.302204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.302218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.311293] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.311307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.320533] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.320547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.329718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.329732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.338530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.338544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.347490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.347504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.355729] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.355744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.364827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.364841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.373637] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.373651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.382874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.382888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.391500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 
[2024-11-20 14:31:14.391514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.400287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.400301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.408949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.408963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.417649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.417666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.426703] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.426717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.435428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.435442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.444370] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.444385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.459 [2024-11-20 14:31:14.453836] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.459 [2024-11-20 14:31:14.453851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.460 [2024-11-20 14:31:14.461627] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.460 [2024-11-20 14:31:14.461642] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.460 [2024-11-20 14:31:14.470569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.460 [2024-11-20 14:31:14.470583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.460 [2024-11-20 14:31:14.479432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.460 [2024-11-20 14:31:14.479446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.460 [2024-11-20 14:31:14.488136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.460 [2024-11-20 14:31:14.488150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.460 [2024-11-20 14:31:14.496865] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.460 [2024-11-20 14:31:14.496880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.460 [2024-11-20 14:31:14.505481] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.460 [2024-11-20 14:31:14.505495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.460 [2024-11-20 14:31:14.514627] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.460 [2024-11-20 14:31:14.514642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.523429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.523444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.532489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.532504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:07.719 [2024-11-20 14:31:14.541450] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.541465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.550291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.550305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.559142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.559157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.567857] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.567872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.577298] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.577313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.585959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.585977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.594714] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.594728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.603923] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.603938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.612531] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.612546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.621200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.621215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.630264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.630280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.639324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.639339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.647718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.647732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.656567] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.656582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.665250] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.665265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.674307] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.674321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.683404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.683418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.691711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.691726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.700702] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.700717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.709741] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.709756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.718592] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.718606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.726842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.726856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.736220] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.736235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.745173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.745188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.753716] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 
[2024-11-20 14:31:14.753731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.762963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.762978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.719 [2024-11-20 14:31:14.771996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.719 [2024-11-20 14:31:14.772011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.780453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.780468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.789152] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.789166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.798026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.798041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.807206] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.807221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.816237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.816257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.824885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.824900] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.833682] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.833697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.842555] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.842569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.851425] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.851439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.860346] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.860360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 19574.00 IOPS, 152.92 MiB/s [2024-11-20T13:31:15.039Z] [2024-11-20 14:31:14.868893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.868908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.877476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.877490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.886519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.886534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.895132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.895147] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.904132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.904147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.913120] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.913135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.921536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.921551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.930841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.930857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.939752] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.939767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.948670] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.948685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.957296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.979 [2024-11-20 14:31:14.957310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.979 [2024-11-20 14:31:14.965988] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.980 [2024-11-20 14:31:14.966002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:07.980 [2024-11-20 14:31:14.974728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.980 [2024-11-20 14:31:14.974743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.980 [2024-11-20 14:31:14.983720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.980 [2024-11-20 14:31:14.983734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.980 [2024-11-20 14:31:14.992573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.980 [2024-11-20 14:31:14.992587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.980 [2024-11-20 14:31:15.001301] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.980 [2024-11-20 14:31:15.001315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.980 [2024-11-20 14:31:15.009984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.980 [2024-11-20 14:31:15.009999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.980 [2024-11-20 14:31:15.019102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.980 [2024-11-20 14:31:15.019117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.980 [2024-11-20 14:31:15.027638] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.980 [2024-11-20 14:31:15.027653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.980 [2024-11-20 14:31:15.036659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.980 [2024-11-20 14:31:15.036674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.045794] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.045809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.054655] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.054670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.062888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.062902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.071493] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.071507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.080250] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.080264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.089185] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.089199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.097879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.097893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.107003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.107017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.116526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.116540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.125281] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.125294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.134156] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.134170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.143091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.143104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.152348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.152362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.160841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.160855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.169628] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.169642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.178484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.178498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.186868] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 
[2024-11-20 14:31:15.186882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.195410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.195424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.204598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.204612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.213642] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.213656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.222162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.222176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.230818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.230832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.239 [2024-11-20 14:31:15.239531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.239 [2024-11-20 14:31:15.239545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.240 [2024-11-20 14:31:15.248587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.240 [2024-11-20 14:31:15.248604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.240 [2024-11-20 14:31:15.257498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.240 [2024-11-20 14:31:15.257512] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.240 [2024-11-20 14:31:15.265794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.240 [2024-11-20 14:31:15.265808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same error pair (subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use", then nvmf_rpc.c:1520:nvmf_rpc_ns_paused: "Unable to add namespace") repeats roughly every 9 ms from 14:31:15.274400 through 14:31:15.865667]
00:11:09.020 19595.75 IOPS, 153.09 MiB/s [2024-11-20T13:31:16.080Z]
[the error pair continues repeating from 14:31:15.874484 through 14:31:16.704864]
00:11:09.802 [2024-11-20 14:31:16.713862] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext:
[2024-11-20 14:31:16.713877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.802 [2024-11-20 14:31:16.722443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.802 [2024-11-20 14:31:16.722457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.802 [2024-11-20 14:31:16.731588] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.802 [2024-11-20 14:31:16.731603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.802 [2024-11-20 14:31:16.739972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.802 [2024-11-20 14:31:16.739987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.802 [2024-11-20 14:31:16.748996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.802 [2024-11-20 14:31:16.749011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.802 [2024-11-20 14:31:16.757814] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.802 [2024-11-20 14:31:16.757832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.802 [2024-11-20 14:31:16.766591] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.802 [2024-11-20 14:31:16.766605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.802 [2024-11-20 14:31:16.775608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.802 [2024-11-20 14:31:16.775622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.802 [2024-11-20 14:31:16.784027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.802 [2024-11-20 14:31:16.784041] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.802 [2024-11-20 14:31:16.792860] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.802 [2024-11-20 14:31:16.792874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.802 [2024-11-20 14:31:16.801722] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.802 [2024-11-20 14:31:16.801736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.802 [2024-11-20 14:31:16.810341] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.802 [2024-11-20 14:31:16.810355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.802 [2024-11-20 14:31:16.819202] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.802 [2024-11-20 14:31:16.819216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.802 [2024-11-20 14:31:16.828210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.802 [2024-11-20 14:31:16.828225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.802 [2024-11-20 14:31:16.837103] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.802 [2024-11-20 14:31:16.837118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.802 [2024-11-20 14:31:16.845952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.802 [2024-11-20 14:31:16.845966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.802 [2024-11-20 14:31:16.854983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.802 [2024-11-20 14:31:16.854998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:10.062 [2024-11-20 14:31:16.863701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.062 [2024-11-20 14:31:16.863716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.062 19620.40 IOPS, 153.28 MiB/s [2024-11-20T13:31:17.122Z] [2024-11-20 14:31:16.872238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.062 [2024-11-20 14:31:16.872257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.062 00:11:10.062 Latency(us) 00:11:10.062 [2024-11-20T13:31:17.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.062 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:10.062 Nvme1n1 : 5.01 19619.87 153.28 0.00 0.00 6518.33 2894.51 15619.41 00:11:10.062 [2024-11-20T13:31:17.122Z] =================================================================================================================== 00:11:10.062 [2024-11-20T13:31:17.122Z] Total : 19619.87 153.28 0.00 0.00 6518.33 2894.51 15619.41 00:11:10.062 [2024-11-20 14:31:16.878114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.062 [2024-11-20 14:31:16.878127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.062 [2024-11-20 14:31:16.886129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.062 [2024-11-20 14:31:16.886138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.062 [2024-11-20 14:31:16.894151] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.062 [2024-11-20 14:31:16.894164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.062 [2024-11-20 14:31:16.902174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:11:10.062 [2024-11-20 14:31:16.902184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.062 [2024-11-20 14:31:16.958314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.063 [2024-11-20 14:31:16.958322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.063 [2024-11-20 14:31:16.966334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.063 [2024-11-20 14:31:16.966344]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.063 [2024-11-20 14:31:16.974353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.063 [2024-11-20 14:31:16.974362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3745228) - No such process 00:11:10.063 14:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3745228 00:11:10.063 14:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.063 14:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.063 14:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:10.063 14:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.063 14:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:10.063 14:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.063 14:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:10.063 delay0 00:11:10.063 14:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.063 14:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:10.063 14:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.063 14:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:10.063 14:31:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.063 14:31:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:10.323 [2024-11-20 14:31:17.135420] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:18.561 Initializing NVMe Controllers 00:11:18.561 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:18.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:18.561 Initialization complete. Launching workers. 00:11:18.561 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 263, failed: 26206 00:11:18.561 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26369, failed to submit 100 00:11:18.561 success 26236, unsuccessful 133, failed 0 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:18.561 rmmod nvme_tcp 00:11:18.561 rmmod nvme_fabrics 00:11:18.561 rmmod nvme_keyring 00:11:18.561 14:31:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3742859 ']' 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3742859 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3742859 ']' 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3742859 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3742859 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3742859' 00:11:18.561 killing process with pid 3742859 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3742859 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3742859 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.561 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.562 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.952 00:11:19.952 real 0m32.390s 00:11:19.952 user 0m44.956s 00:11:19.952 sys 0m9.475s 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.952 ************************************ 00:11:19.952 END TEST nvmf_zcopy 00:11:19.952 ************************************ 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:19.952 ************************************ 00:11:19.952 START TEST nvmf_nmic 00:11:19.952 ************************************ 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:19.952 * Looking for test storage... 00:11:19.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@341 -- # ver2_l=1 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:19.952 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:19.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.953 --rc genhtml_branch_coverage=1 00:11:19.953 --rc genhtml_function_coverage=1 00:11:19.953 --rc genhtml_legend=1 00:11:19.953 --rc geninfo_all_blocks=1 00:11:19.953 --rc geninfo_unexecuted_blocks=1 00:11:19.953 00:11:19.953 ' 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:19.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.953 --rc genhtml_branch_coverage=1 00:11:19.953 --rc genhtml_function_coverage=1 00:11:19.953 --rc genhtml_legend=1 00:11:19.953 --rc geninfo_all_blocks=1 00:11:19.953 --rc geninfo_unexecuted_blocks=1 00:11:19.953 00:11:19.953 ' 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:19.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.953 --rc genhtml_branch_coverage=1 00:11:19.953 --rc genhtml_function_coverage=1 00:11:19.953 --rc genhtml_legend=1 00:11:19.953 --rc geninfo_all_blocks=1 00:11:19.953 --rc geninfo_unexecuted_blocks=1 00:11:19.953 00:11:19.953 ' 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:19.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.953 --rc genhtml_branch_coverage=1 00:11:19.953 --rc genhtml_function_coverage=1 00:11:19.953 --rc genhtml_legend=1 00:11:19.953 --rc geninfo_all_blocks=1 00:11:19.953 --rc geninfo_unexecuted_blocks=1 00:11:19.953 00:11:19.953 ' 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # 
uname -s 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.953 
14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.953 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.236 14:31:31 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.236 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:25.237 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:25.237 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:25.237 Found net devices under 0000:31:00.0: cvl_0_0 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:25.237 Found net devices under 0000:31:00.1: cvl_0_1 00:11:25.237 
14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.237 14:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:25.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:11:25.237 00:11:25.237 --- 10.0.0.2 ping statistics --- 00:11:25.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.237 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:25.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:11:25.237 00:11:25.237 --- 10.0.0.1 ping statistics --- 00:11:25.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.237 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3752582 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3752582 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3752582 
']' 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.237 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.237 [2024-11-20 14:31:32.294791] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:11:25.237 [2024-11-20 14:31:32.294840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.498 [2024-11-20 14:31:32.366584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.498 [2024-11-20 14:31:32.397240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.498 [2024-11-20 14:31:32.397276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.498 [2024-11-20 14:31:32.397282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.498 [2024-11-20 14:31:32.397287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:25.498 [2024-11-20 14:31:32.397292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.498 [2024-11-20 14:31:32.398555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.498 [2024-11-20 14:31:32.398706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.498 [2024-11-20 14:31:32.398837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.498 [2024-11-20 14:31:32.398838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.498 [2024-11-20 14:31:32.507161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:25.498 14:31:32 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.498 Malloc0 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.498 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.758 [2024-11-20 14:31:32.560249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:25.758 test case1: single bdev can't be used in multiple subsystems 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.758 [2024-11-20 14:31:32.584144] bdev.c:8507:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:25.758 [2024-11-20 14:31:32.584160] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:25.758 [2024-11-20 14:31:32.584169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:11:25.758 request: 00:11:25.758 { 00:11:25.758 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:25.758 "namespace": { 00:11:25.758 "bdev_name": "Malloc0", 00:11:25.758 "no_auto_visible": false, 00:11:25.758 "hide_metadata": false 00:11:25.758 }, 00:11:25.758 "method": "nvmf_subsystem_add_ns", 00:11:25.758 "req_id": 1 00:11:25.758 } 00:11:25.758 Got JSON-RPC error response 00:11:25.758 response: 00:11:25.758 { 00:11:25.758 "code": -32602, 00:11:25.758 "message": "Invalid parameters" 00:11:25.758 } 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:25.758 Adding namespace failed - expected result. 
00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:25.758 test case2: host connect to nvmf target in multiple paths 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:25.758 [2024-11-20 14:31:32.592261] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.758 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:27.138 14:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:28.519 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:28.520 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:28.520 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:28.520 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:28.520 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:11:31.059 14:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:31.059 14:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:31.060 14:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:31.060 14:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:31.060 14:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:31.060 14:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:31.060 14:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:31.060 [global] 00:11:31.060 thread=1 00:11:31.060 invalidate=1 00:11:31.060 rw=write 00:11:31.060 time_based=1 00:11:31.060 runtime=1 00:11:31.060 ioengine=libaio 00:11:31.060 direct=1 00:11:31.060 bs=4096 00:11:31.060 iodepth=1 00:11:31.060 norandommap=0 00:11:31.060 numjobs=1 00:11:31.060 00:11:31.060 verify_dump=1 00:11:31.060 verify_backlog=512 00:11:31.060 verify_state_save=0 00:11:31.060 do_verify=1 00:11:31.060 verify=crc32c-intel 00:11:31.060 [job0] 00:11:31.060 filename=/dev/nvme0n1 00:11:31.060 Could not set queue depth (nvme0n1) 00:11:31.060 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.060 fio-3.35 00:11:31.060 Starting 1 thread 00:11:31.997 00:11:31.997 job0: (groupid=0, jobs=1): err= 0: pid=3754116: Wed Nov 20 14:31:38 2024 00:11:31.997 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:31.997 slat (nsec): min=3926, max=43712, avg=17249.32, stdev=5922.36 00:11:31.998 clat (usec): min=761, max=1279, avg=992.06, stdev=72.97 00:11:31.998 lat (usec): min=786, max=1322, 
avg=1009.31, stdev=73.62 00:11:31.998 clat percentiles (usec): 00:11:31.998 | 1.00th=[ 824], 5.00th=[ 865], 10.00th=[ 898], 20.00th=[ 938], 00:11:31.998 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:11:31.998 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1106], 00:11:31.998 | 99.00th=[ 1172], 99.50th=[ 1172], 99.90th=[ 1287], 99.95th=[ 1287], 00:11:31.998 | 99.99th=[ 1287] 00:11:31.998 write: IOPS=869, BW=3477KiB/s (3560kB/s)(3480KiB/1001msec); 0 zone resets 00:11:31.998 slat (usec): min=4, max=27445, avg=49.89, stdev=929.93 00:11:31.998 clat (usec): min=246, max=800, avg=498.25, stdev=89.52 00:11:31.998 lat (usec): min=255, max=28153, avg=548.14, stdev=941.42 00:11:31.998 clat percentiles (usec): 00:11:31.998 | 1.00th=[ 281], 5.00th=[ 355], 10.00th=[ 375], 20.00th=[ 424], 00:11:31.998 | 30.00th=[ 449], 40.00th=[ 478], 50.00th=[ 502], 60.00th=[ 523], 00:11:31.998 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 611], 95.00th=[ 635], 00:11:31.998 | 99.00th=[ 685], 99.50th=[ 709], 99.90th=[ 799], 99.95th=[ 799], 00:11:31.998 | 99.99th=[ 799] 00:11:31.998 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:31.998 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:31.998 lat (usec) : 250=0.14%, 500=31.40%, 750=31.33%, 1000=19.25% 00:11:31.998 lat (msec) : 2=17.87% 00:11:31.998 cpu : usr=0.80%, sys=2.90%, ctx=1387, majf=0, minf=1 00:11:31.998 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.998 issued rwts: total=512,870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.998 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.998 00:11:31.998 Run status group 0 (all jobs): 00:11:31.998 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB 
(2097kB), run=1001-1001msec 00:11:31.998 WRITE: bw=3477KiB/s (3560kB/s), 3477KiB/s-3477KiB/s (3560kB/s-3560kB/s), io=3480KiB (3564kB), run=1001-1001msec 00:11:31.998 00:11:31.998 Disk stats (read/write): 00:11:31.998 nvme0n1: ios=537/667, merge=0/0, ticks=1461/331, in_queue=1792, util=98.70% 00:11:31.998 14:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.277 rmmod nvme_tcp 00:11:32.277 rmmod nvme_fabrics 00:11:32.277 rmmod nvme_keyring 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3752582 ']' 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3752582 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3752582 ']' 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3752582 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3752582 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3752582' 00:11:32.277 killing process with pid 3752582 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3752582 00:11:32.277 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3752582 00:11:32.538 14:31:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:32.538 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:32.538 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:32.538 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:32.538 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:32.538 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:32.538 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:32.539 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:32.539 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:32.539 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.539 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.539 14:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.442 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:34.442 00:11:34.442 real 0m14.777s 00:11:34.442 user 0m41.938s 00:11:34.442 sys 0m4.734s 00:11:34.442 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.442 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:34.442 ************************************ 00:11:34.442 END TEST nvmf_nmic 00:11:34.442 ************************************ 00:11:34.442 14:31:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:34.442 14:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.442 14:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.442 14:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:34.442 ************************************ 00:11:34.442 START TEST nvmf_fio_target 00:11:34.442 ************************************ 00:11:34.442 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:34.702 * Looking for test storage... 00:11:34.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:34.702 14:31:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:34.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.702 --rc genhtml_branch_coverage=1 00:11:34.702 --rc genhtml_function_coverage=1 00:11:34.702 --rc genhtml_legend=1 00:11:34.702 --rc geninfo_all_blocks=1 00:11:34.702 --rc geninfo_unexecuted_blocks=1 00:11:34.702 00:11:34.702 ' 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:34.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.702 --rc genhtml_branch_coverage=1 00:11:34.702 --rc genhtml_function_coverage=1 00:11:34.702 --rc genhtml_legend=1 00:11:34.702 --rc geninfo_all_blocks=1 00:11:34.702 --rc geninfo_unexecuted_blocks=1 00:11:34.702 00:11:34.702 ' 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:34.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.702 --rc genhtml_branch_coverage=1 00:11:34.702 --rc genhtml_function_coverage=1 00:11:34.702 --rc genhtml_legend=1 00:11:34.702 --rc geninfo_all_blocks=1 00:11:34.702 --rc geninfo_unexecuted_blocks=1 00:11:34.702 00:11:34.702 ' 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:11:34.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.702 --rc genhtml_branch_coverage=1 00:11:34.702 --rc genhtml_function_coverage=1 00:11:34.702 --rc genhtml_legend=1 00:11:34.702 --rc geninfo_all_blocks=1 00:11:34.702 --rc geninfo_unexecuted_blocks=1 00:11:34.702 00:11:34.702 ' 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.702 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:34.703 14:31:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:39.976 14:31:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:39.976 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:39.976 14:31:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:39.976 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.976 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:39.977 Found net devices under 0000:31:00.0: cvl_0_0 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:39.977 Found net devices under 0000:31:00.1: cvl_0_1 
00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:39.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:39.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:11:39.977 00:11:39.977 --- 10.0.0.2 ping statistics --- 00:11:39.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.977 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:11:39.977 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:39.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:39.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:11:39.977 00:11:39.977 --- 10.0.0.1 ping statistics --- 00:11:39.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.977 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3758806 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3758806 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3758806 ']' 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.977 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:40.236 [2024-11-20 14:31:47.068390] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:11:40.236 [2024-11-20 14:31:47.068439] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:40.236 [2024-11-20 14:31:47.152657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:40.236 [2024-11-20 14:31:47.188710] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:40.236 [2024-11-20 14:31:47.188744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:40.236 [2024-11-20 14:31:47.188752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:40.236 [2024-11-20 14:31:47.188758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:40.236 [2024-11-20 14:31:47.188764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:40.236 [2024-11-20 14:31:47.190575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:40.236 [2024-11-20 14:31:47.190691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:40.236 [2024-11-20 14:31:47.190842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:40.236 [2024-11-20 14:31:47.190843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:40.804 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:40.804 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
00:11:40.804 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:40.804 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:40.804 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:11:41.063 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:41.063 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:11:41.063 [2024-11-20 14:31:48.005977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:41.063 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:41.323 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:11:41.323 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:41.323 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:11:41.323 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:41.582 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:11:41.582 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:41.841 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:11:41.841 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:11:41.841 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:42.100 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:11:42.100 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:42.357 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:11:42.357 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:42.357 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:11:42.357 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:11:42.616 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:42.874 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:11:42.874 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:42.874 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:11:42.874 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:43.133 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:43.133 [2024-11-20 14:31:50.140449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:43.133 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:11:43.392 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:11:43.650 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:45.026 14:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:11:45.026 14:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:11:45.026 14:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:45.026 14:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:11:45.026 14:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:11:45.026 14:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:11:46.926 14:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:46.926 14:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:46.926 14:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:46.926 14:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:11:46.926 14:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:46.926 14:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
00:11:46.926 14:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:11:46.926 [global]
00:11:46.926 thread=1
00:11:46.926 invalidate=1
00:11:46.926 rw=write
00:11:46.926 time_based=1
00:11:46.926 runtime=1
00:11:46.926 ioengine=libaio
00:11:46.926 direct=1
00:11:46.926 bs=4096
00:11:46.926 iodepth=1
00:11:46.926 norandommap=0
00:11:46.926 numjobs=1
00:11:46.926
00:11:46.926 verify_dump=1
00:11:46.926 verify_backlog=512
00:11:46.926 verify_state_save=0
00:11:46.926 do_verify=1
00:11:46.926 verify=crc32c-intel
00:11:46.926 [job0]
00:11:46.926 filename=/dev/nvme0n1
00:11:46.926 [job1]
00:11:46.926 filename=/dev/nvme0n2
00:11:46.926 [job2]
00:11:46.926 filename=/dev/nvme0n3
00:11:46.926 [job3]
00:11:46.926 filename=/dev/nvme0n4
00:11:47.183 Could not set queue depth (nvme0n1)
00:11:47.183 Could not set queue depth (nvme0n2)
00:11:47.183 Could not set queue depth (nvme0n3)
00:11:47.183 Could not set queue depth (nvme0n4)
00:11:47.442 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:47.442 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:47.442 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:47.442 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:47.442 fio-3.35
00:11:47.442 Starting 4 threads
00:11:48.820
00:11:48.820 job0: (groupid=0, jobs=1): err= 0: pid=3760625: Wed Nov 20 14:31:55 2024
00:11:48.820 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:11:48.820 slat (nsec): min=3015, max=45941, avg=19131.26, stdev=7533.82
00:11:48.820 clat (usec): min=589, max=1201, avg=911.96, stdev=88.87
00:11:48.820 lat (usec): min=601, max=1228, avg=931.09, stdev=91.93
00:11:48.820 clat percentiles (usec):
00:11:48.820 | 1.00th=[ 701], 5.00th=[ 766], 10.00th=[ 799], 20.00th=[ 848],
00:11:48.820 | 30.00th=[ 873], 40.00th=[ 889], 50.00th=[ 906], 60.00th=[ 930],
00:11:48.820 | 70.00th=[ 963], 80.00th=[ 988], 90.00th=[ 1020], 95.00th=[ 1057],
00:11:48.820 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1205], 99.95th=[ 1205],
00:11:48.820 | 99.99th=[ 1205]
00:11:48.820 write: IOPS=960, BW=3840KiB/s (3932kB/s)(3844KiB/1001msec); 0 zone resets
00:11:48.820 slat (nsec): min=3431, max=68826, avg=18139.96, stdev=9743.06
00:11:48.820 clat (usec): min=148, max=957, avg=519.32, stdev=129.33
00:11:48.820 lat (usec): min=163, max=973, avg=537.46, stdev=129.98
00:11:48.820 clat percentiles (usec):
00:11:48.820 | 1.00th=[ 235], 5.00th=[ 314], 10.00th=[ 351], 20.00th=[ 408],
00:11:48.820 | 30.00th=[ 449], 40.00th=[ 482], 50.00th=[ 523], 60.00th=[ 553],
00:11:48.820 | 70.00th=[ 586], 80.00th=[ 627], 90.00th=[ 685], 95.00th=[ 742],
00:11:48.820 | 99.00th=[ 832], 99.50th=[ 889], 99.90th=[ 955], 99.95th=[ 955],
00:11:48.820 | 99.99th=[ 955]
00:11:48.820 bw ( KiB/s): min= 4096, max= 4096, per=42.40%, avg=4096.00, stdev= 0.00, samples=1
00:11:48.820 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:11:48.820 lat (usec) : 250=0.95%, 500=28.24%, 750=34.62%, 1000=30.96%
00:11:48.820 lat (msec) : 2=5.23%
00:11:48.820 cpu : usr=2.30%, sys=3.80%, ctx=1476, majf=0, minf=1
00:11:48.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:48.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:48.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:48.820 issued rwts: total=512,961,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:48.820 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:48.820 job1: (groupid=0, jobs=1): err= 0: pid=3760640: Wed Nov 20 14:31:55 2024
00:11:48.820 read: IOPS=16, BW=66.8KiB/s (68.4kB/s)(68.0KiB/1018msec)
00:11:48.820 slat (nsec): min=11727, max=28046, avg=26790.71, stdev=3885.17
00:11:48.820 clat (usec): min=41886, max=42974, avg=42051.70, stdev=268.55
00:11:48.820 lat (usec): min=41914, max=43002, avg=42078.49, stdev=267.09
00:11:48.820 clat percentiles (usec):
00:11:48.820 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681],
00:11:48.820 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:11:48.820 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730],
00:11:48.820 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:11:48.820 | 99.99th=[42730]
00:11:48.820 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets
00:11:48.820 slat (nsec): min=3553, max=43868, avg=14137.78, stdev=6639.65
00:11:48.820 clat (usec): min=213, max=815, avg=571.85, stdev=101.71
00:11:48.820 lat (usec): min=218, max=851, avg=585.99, stdev=104.43
00:11:48.820 clat percentiles (usec):
00:11:48.820 | 1.00th=[ 306], 5.00th=[ 412], 10.00th=[ 433], 20.00th=[ 486],
00:11:48.820 | 30.00th=[ 523], 40.00th=[ 553], 50.00th=[ 570], 60.00th=[ 603],
00:11:48.820 | 70.00th=[ 627], 80.00th=[ 660], 90.00th=[ 701], 95.00th=[ 742],
00:11:48.820 | 99.00th=[ 791], 99.50th=[ 807], 99.90th=[ 816], 99.95th=[ 816],
00:11:48.820 | 99.99th=[ 816]
00:11:48.820 bw ( KiB/s): min= 4096, max= 4096, per=42.40%, avg=4096.00, stdev= 0.00, samples=1
00:11:48.820 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:11:48.820 lat (usec) : 250=0.19%, 500=21.93%, 750=71.08%, 1000=3.59%
00:11:48.820 lat (msec) : 50=3.21%
00:11:48.820 cpu : usr=0.59%, sys=0.98%, ctx=530, majf=0, minf=1
00:11:48.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:48.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:48.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:48.821 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:48.821 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:48.821 job2: (groupid=0, jobs=1): err= 0: pid=3760665: Wed Nov 20 14:31:55 2024
00:11:48.821 read: IOPS=16, BW=65.8KiB/s (67.3kB/s)(68.0KiB/1034msec)
00:11:48.821 slat (nsec): min=12083, max=29187, avg=27634.94, stdev=4020.17
00:11:48.821 clat (usec): min=41810, max=42896, avg=42028.91, stdev=250.09
00:11:48.821 lat (usec): min=41838, max=42925, avg=42056.54, stdev=249.53
00:11:48.821 clat percentiles (usec):
00:11:48.821 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681],
00:11:48.821 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206],
00:11:48.821 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730],
00:11:48.821 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:11:48.821 | 99.99th=[42730]
00:11:48.821 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets
00:11:48.821 slat (nsec): min=3777, max=42027, avg=15576.75, stdev=6111.94
00:11:48.821 clat (usec): min=223, max=944, avg=603.89, stdev=135.06
00:11:48.821 lat (usec): min=239, max=960, avg=619.47, stdev=136.51
00:11:48.821 clat percentiles (usec):
00:11:48.821 | 1.00th=[ 260], 5.00th=[ 371], 10.00th=[ 420], 20.00th=[ 494],
00:11:48.821 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 644],
00:11:48.821 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 824],
00:11:48.821 | 99.00th=[ 898], 99.50th=[ 930], 99.90th=[ 947], 99.95th=[ 947],
00:11:48.821 | 99.99th=[ 947]
00:11:48.821 bw ( KiB/s): min= 4096, max= 4096, per=42.40%, avg=4096.00, stdev= 0.00, samples=1
00:11:48.821 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:11:48.821 lat (usec) : 250=0.57%, 500=20.23%, 750=63.33%, 1000=12.67%
00:11:48.821 lat (msec) : 50=3.21%
00:11:48.821 cpu : usr=0.39%, sys=1.36%, ctx=530, majf=0, minf=1
00:11:48.821 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:48.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:48.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:48.821 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:48.821 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:48.821 job3: (groupid=0, jobs=1): err= 0: pid=3760675: Wed Nov 20 14:31:55 2024
00:11:48.821 read: IOPS=17, BW=70.2KiB/s (71.9kB/s)(72.0KiB/1025msec)
00:11:48.821 slat (nsec): min=10578, max=25540, avg=23437.94, stdev=4645.72
00:11:48.821 clat (usec): min=1055, max=42965, avg=39830.72, stdev=9682.40
00:11:48.821 lat (usec): min=1066, max=42990, avg=39854.16, stdev=9685.53
00:11:48.821 clat percentiles (usec):
00:11:48.821 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[41681], 20.00th=[41681],
00:11:48.821 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:11:48.821 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730],
00:11:48.821 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:11:48.821 | 99.99th=[42730]
00:11:48.821 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets
00:11:48.821 slat (nsec): min=3614, max=50839, avg=13424.04, stdev=7457.69
00:11:48.821 clat (usec): min=170, max=1049, avg=582.68, stdev=158.84
00:11:48.821 lat (usec): min=174, max=1063, avg=596.10, stdev=160.57
00:11:48.821 clat percentiles (usec):
00:11:48.821 | 1.00th=[ 239], 5.00th=[ 322], 10.00th=[ 379], 20.00th=[ 445],
00:11:48.821 | 30.00th=[ 486], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 635],
00:11:48.821 | 70.00th=[ 676], 80.00th=[ 725], 90.00th=[ 783], 95.00th=[ 840],
00:11:48.821 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 1057], 99.95th=[ 1057],
00:11:48.821 | 99.99th=[ 1057]
00:11:48.821 bw ( KiB/s): min= 4096, max= 4096, per=42.40%, avg=4096.00, stdev= 0.00, samples=1
00:11:48.821 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:11:48.821 lat (usec) : 250=1.89%, 500=29.43%, 750=50.57%, 1000=14.53%
00:11:48.821 lat (msec) : 2=0.38%, 50=3.21%
00:11:48.821 cpu : usr=0.29%, sys=0.78%, ctx=530, majf=0, minf=2
00:11:48.821 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:48.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:48.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:48.821 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:48.821 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:48.821
00:11:48.821 Run status group 0 (all jobs):
00:11:48.821 READ: bw=2182KiB/s (2234kB/s), 65.8KiB/s-2046KiB/s (67.3kB/s-2095kB/s), io=2256KiB (2310kB), run=1001-1034msec
00:11:48.821 WRITE: bw=9660KiB/s (9891kB/s), 1981KiB/s-3840KiB/s (2028kB/s-3932kB/s), io=9988KiB (10.2MB), run=1001-1034msec
00:11:48.821
00:11:48.821 Disk stats (read/write):
00:11:48.821 nvme0n1: ios=537/623, merge=0/0, ticks=1392/272, in_queue=1664, util=96.69%
00:11:48.821 nvme0n2: ios=35/512, merge=0/0, ticks=1468/242, in_queue=1710, util=97.14%
00:11:48.821 nvme0n3: ios=69/512, merge=0/0, ticks=1330/245, in_queue=1575, util=97.04%
00:11:48.821 nvme0n4: ios=13/512, merge=0/0, ticks=507/287, in_queue=794, util=89.40%
00:11:48.821 14:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:11:48.821 [global]
00:11:48.821 thread=1
00:11:48.821 invalidate=1
00:11:48.821 rw=randwrite
00:11:48.821 time_based=1
00:11:48.821 runtime=1
00:11:48.821 ioengine=libaio
00:11:48.821 direct=1
00:11:48.821 bs=4096
00:11:48.821 iodepth=1
00:11:48.821 norandommap=0
00:11:48.821 numjobs=1
00:11:48.821
00:11:48.821 verify_dump=1
00:11:48.821 verify_backlog=512
00:11:48.821 verify_state_save=0
00:11:48.821 do_verify=1
00:11:48.821 verify=crc32c-intel
00:11:48.821 [job0]
00:11:48.821 filename=/dev/nvme0n1
00:11:48.821 [job1]
00:11:48.821 filename=/dev/nvme0n2
00:11:48.821 [job2]
00:11:48.821 filename=/dev/nvme0n3
00:11:48.821 [job3]
00:11:48.821 filename=/dev/nvme0n4
00:11:48.821 Could not set queue depth (nvme0n1)
00:11:48.821 Could not set queue depth (nvme0n2)
00:11:48.821 Could not set queue depth (nvme0n3)
00:11:48.821 Could not set queue depth (nvme0n4)
00:11:48.821 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:48.821 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:48.821 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:48.821 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:48.821 fio-3.35
00:11:48.821 Starting 4 threads
00:11:50.203
00:11:50.203 job0: (groupid=0, jobs=1): err= 0: pid=3761164: Wed Nov 20 14:31:57 2024
00:11:50.203 read: IOPS=363, BW=1455KiB/s (1490kB/s)(1512KiB/1039msec)
00:11:50.203 slat (nsec): min=3025, max=30417, avg=11607.76, stdev=9262.84
00:11:50.203 clat (usec): min=213, max=42999, avg=2362.47, stdev=8384.22
00:11:50.203 lat (usec): min=221, max=43025, avg=2374.08, stdev=8387.18
00:11:50.203 clat percentiles (usec):
00:11:50.203 | 1.00th=[ 441], 5.00th=[ 478], 10.00th=[ 502], 20.00th=[ 529],
00:11:50.203 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 627],
00:11:50.203 | 70.00th=[ 644], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 857],
00:11:50.203 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254],
00:11:50.203 | 99.99th=[43254]
00:11:50.203 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets
00:11:50.203 slat (nsec): min=4184, max=23171, avg=5368.50, stdev=1232.79
00:11:50.203 clat (usec): min=116, max=437, avg=266.80, stdev=41.88
00:11:50.203 lat (usec): min=121, max=443, avg=272.17, stdev=42.04
00:11:50.203 clat percentiles (usec):
00:11:50.203 | 1.00th=[ 127], 5.00th=[ 212], 10.00th=[ 235], 20.00th=[ 245],
00:11:50.203 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269],
00:11:50.203 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 347],
00:11:50.203 | 99.00th=[ 429], 99.50th=[ 433], 99.90th=[ 437], 99.95th=[ 437],
00:11:50.203 | 99.99th=[ 437]
00:11:50.203 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1
00:11:50.203 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:11:50.203 lat (usec) : 250=18.76%, 500=42.70%, 750=34.27%, 1000=2.47%
00:11:50.203 lat (msec) : 50=1.80%
00:11:50.203 cpu : usr=0.29%, sys=0.77%, ctx=893, majf=0, minf=1
00:11:50.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:50.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:50.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:50.204 issued rwts: total=378,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:50.204 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:50.204 job1: (groupid=0, jobs=1): err= 0: pid=3761178: Wed Nov 20 14:31:57 2024
00:11:50.204 read: IOPS=18, BW=73.6KiB/s (75.3kB/s)(76.0KiB/1033msec)
00:11:50.204 slat (nsec): min=24493, max=25077, avg=24779.00, stdev=178.39
00:11:50.204 clat (usec): min=1012, max=42003, avg=39208.90, stdev=9262.08
00:11:50.204 lat (usec): min=1037, max=42028, avg=39233.68, stdev=9262.10
00:11:50.204 clat percentiles (usec):
00:11:50.204 | 1.00th=[ 1012], 5.00th=[ 1012], 10.00th=[41157], 20.00th=[41157],
00:11:50.204 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:11:50.204 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:11:50.204 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:11:50.204 | 99.99th=[42206]
00:11:50.204 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets
00:11:50.204 slat (nsec): min=4012, max=43413, avg=13188.86, stdev=5210.61
00:11:50.204 clat (usec): min=220, max=985, avg=544.69, stdev=149.29
00:11:50.204 lat (usec): min=232, max=998, avg=557.88, stdev=150.42
00:11:50.204 clat percentiles (usec):
00:11:50.204 | 1.00th=[ 235], 5.00th=[ 302], 10.00th=[ 351], 20.00th=[ 424],
00:11:50.204 | 30.00th=[ 465], 40.00th=[ 494], 50.00th=[ 529], 60.00th=[ 586],
00:11:50.204 | 70.00th=[ 627], 80.00th=[ 660], 90.00th=[ 734], 95.00th=[ 799],
00:11:50.204 | 99.00th=[ 914], 99.50th=[ 971], 99.90th=[ 988], 99.95th=[ 988],
00:11:50.204 | 99.99th=[ 988]
00:11:50.204 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1
00:11:50.204 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:11:50.204 lat (usec) : 250=1.69%, 500=38.98%, 750=47.65%, 1000=8.10%
00:11:50.204 lat (msec) : 2=0.19%, 50=3.39%
00:11:50.204 cpu : usr=0.29%, sys=0.58%, ctx=531, majf=0, minf=2
00:11:50.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:50.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:50.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:50.204 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:50.204 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:50.204 job2: (groupid=0, jobs=1): err= 0: pid=3761195: Wed Nov 20 14:31:57 2024
00:11:50.204 read: IOPS=843, BW=3373KiB/s (3454kB/s)(3376KiB/1001msec)
00:11:50.204 slat (nsec): min=3246, max=45517, avg=18758.31, stdev=8513.70
00:11:50.204 clat (usec): min=257, max=1198, avg=745.00, stdev=138.58
00:11:50.204 lat (usec): min=265, max=1213, avg=763.76, stdev=139.31
00:11:50.204 clat percentiles (usec):
00:11:50.204 | 1.00th=[ 392], 5.00th=[ 537], 10.00th=[ 586], 20.00th=[ 635],
00:11:50.204 | 30.00th=[ 668], 40.00th=[ 709], 50.00th=[ 750], 60.00th=[ 775],
00:11:50.204 | 70.00th=[ 807], 80.00th=[ 848], 90.00th=[ 930], 95.00th=[ 979],
00:11:50.204 | 99.00th=[ 1090], 99.50th=[ 1139], 99.90th=[ 1205], 99.95th=[ 1205],
00:11:50.204 | 99.99th=[ 1205]
00:11:50.204 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets
00:11:50.204 slat (nsec): min=4036, max=52392, avg=18027.12, stdev=10326.65
00:11:50.204 clat (usec): min=90, max=722, avg=319.26, stdev=93.62
00:11:50.204 lat (usec): min=95, max=747, avg=337.29, stdev=95.57
00:11:50.204 clat percentiles (usec):
00:11:50.204 | 1.00th=[ 155], 5.00th=[ 182], 10.00th=[ 210], 20.00th=[ 255],
00:11:50.204 | 30.00th=[ 273], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 314],
00:11:50.204 | 70.00th=[ 351], 80.00th=[ 392], 90.00th=[ 453], 95.00th=[ 498],
00:11:50.204 | 99.00th=[ 578], 99.50th=[ 627], 99.90th=[ 709], 99.95th=[ 725],
00:11:50.204 | 99.99th=[ 725]
00:11:50.204 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1
00:11:50.204 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:11:50.204 lat (usec) : 100=0.16%, 250=9.80%, 500=43.79%, 750=23.45%, 1000=21.25%
00:11:50.204 lat (msec) : 2=1.55%
00:11:50.204 cpu : usr=1.20%, sys=4.00%, ctx=1869, majf=0, minf=1
00:11:50.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:50.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:50.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:50.204 issued rwts: total=844,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:50.204 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:50.204 job3: (groupid=0, jobs=1): err= 0: pid=3761201: Wed Nov 20 14:31:57 2024
00:11:50.204 read: IOPS=16, BW=67.5KiB/s (69.1kB/s)(68.0KiB/1007msec)
00:11:50.204 slat (nsec): min=11235, max=26004, avg=24850.41, stdev=3511.82
00:11:50.204 clat (usec): min=1066, max=42982, avg=39653.10, stdev=9953.53
00:11:50.204 lat (usec): min=1092, max=43007, avg=39677.95, stdev=9953.32
00:11:50.204 clat percentiles (usec):
00:11:50.204 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[41157], 20.00th=[41681],
00:11:50.204 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206],
00:11:50.204 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730],
00:11:50.204 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:11:50.204 | 99.99th=[42730]
00:11:50.204 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets
00:11:50.204 slat (nsec): min=3911, max=62290, avg=15196.19, stdev=8118.37
00:11:50.204 clat (usec): min=238, max=947, avg=629.96, stdev=117.91
00:11:50.204 lat (usec): min=242, max=959, avg=645.15, stdev=119.77
00:11:50.204 clat percentiles (usec):
00:11:50.204 | 1.00th=[ 326], 5.00th=[ 429], 10.00th=[ 465], 20.00th=[ 537],
00:11:50.204 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 644], 60.00th=[ 676],
00:11:50.204 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 766], 95.00th=[ 799],
00:11:50.204 | 99.00th=[ 898], 99.50th=[ 930], 99.90th=[ 947], 99.95th=[ 947],
00:11:50.204 | 99.99th=[ 947]
00:11:50.204 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1
00:11:50.204 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:11:50.204 lat (usec) : 250=0.19%, 500=13.99%, 750=68.62%, 1000=13.99%
00:11:50.204 lat (msec) : 2=0.19%, 50=3.02%
00:11:50.204 cpu : usr=0.40%, sys=0.70%, ctx=529, majf=0, minf=1
00:11:50.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:50.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:50.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:50.204 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:50.204 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:50.204
00:11:50.204 Run status group 0 (all jobs):
00:11:50.204 READ: bw=4843KiB/s (4959kB/s), 67.5KiB/s-3373KiB/s (69.1kB/s-3454kB/s), io=5032KiB (5153kB), run=1001-1039msec
00:11:50.204 WRITE: bw=9856KiB/s (10.1MB/s), 1971KiB/s-4092KiB/s (2018kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1039msec
00:11:50.204
00:11:50.204 Disk stats (read/write):
00:11:50.204 nvme0n1: ios=399/512, merge=0/0, ticks=1614/132, in_queue=1746, util=94.19%
00:11:50.204 nvme0n2: ios=64/512, merge=0/0, ticks=639/271, in_queue=910, util=92.97%
00:11:50.204 nvme0n3: ios=660/1024, merge=0/0, ticks=1399/313, in_queue=1712, util=98.12%
00:11:50.204 nvme0n4: ios=70/512, merge=0/0, ticks=588/312, in_queue=900, util=93.58%
00:11:50.204 14:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:11:50.204 [global]
00:11:50.204 thread=1
00:11:50.204 invalidate=1
00:11:50.204 rw=write
00:11:50.204 time_based=1
00:11:50.204 runtime=1
00:11:50.204 ioengine=libaio
00:11:50.204 direct=1
00:11:50.204 bs=4096
00:11:50.204 iodepth=128
00:11:50.204 norandommap=0
00:11:50.204 numjobs=1
00:11:50.204
00:11:50.204 verify_dump=1
00:11:50.204 verify_backlog=512
00:11:50.204 verify_state_save=0
00:11:50.204 do_verify=1
00:11:50.204 verify=crc32c-intel
00:11:50.204 [job0]
00:11:50.204 filename=/dev/nvme0n1
00:11:50.204 [job1]
00:11:50.204 filename=/dev/nvme0n2
00:11:50.204 [job2]
00:11:50.204 filename=/dev/nvme0n3
00:11:50.204 [job3]
00:11:50.204 filename=/dev/nvme0n4
00:11:50.204 Could not set queue depth (nvme0n1)
00:11:50.204 Could not set queue depth (nvme0n2)
00:11:50.204 Could not set queue depth (nvme0n3)
00:11:50.204 Could not set queue depth (nvme0n4)
00:11:50.463 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:50.463 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:50.463 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:50.463 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:11:50.463 fio-3.35
00:11:50.463 Starting 4 threads
00:11:51.873
00:11:51.873 job0: (groupid=0, jobs=1): err= 0: pid=3761725: Wed Nov 20 14:31:58 2024
00:11:51.873 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec)
00:11:51.873 slat (nsec): min=956, max=13552k, avg=103547.31, stdev=702346.00
00:11:51.873 clat (usec): min=2186, max=46453, avg=14281.92, stdev=8853.96
00:11:51.873 lat (usec): min=2192, max=46483, avg=14385.47, stdev=8924.73
00:11:51.873 clat percentiles (usec):
00:11:51.873 | 1.00th=[ 2966], 5.00th=[ 4621], 10.00th=[ 5866], 20.00th=[ 7373],
00:11:51.873 | 30.00th=[ 8586], 40.00th=[ 9503], 50.00th=[11076], 60.00th=[12518],
00:11:51.873 | 70.00th=[17695], 80.00th=[20317], 90.00th=[27919], 95.00th=[32113],
00:11:51.873 | 99.00th=[38536], 99.50th=[40109], 99.90th=[45876], 99.95th=[45876],
00:11:51.873 | 99.99th=[46400]
00:11:51.873 write: IOPS=3752, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1004msec); 0 zone resets
00:11:51.873 slat (nsec): min=1729, max=9689.2k, avg=144707.04, stdev=731919.72
00:11:51.873 clat (usec): min=1216, max=76931, avg=20172.83, stdev=16332.94
00:11:51.873 lat (usec): min=2114, max=76940, avg=20317.54, stdev=16448.88
00:11:51.873 clat percentiles (usec):
00:11:51.873 | 1.00th=[ 3064], 5.00th=[ 5080], 10.00th=[ 6128], 20.00th=[ 6652],
00:11:51.873 | 30.00th=[ 7570], 40.00th=[11076], 50.00th=[13042], 60.00th=[18220],
00:11:51.873 | 70.00th=[27395], 80.00th=[35914], 90.00th=[45351], 95.00th=[53216],
00:11:51.873 | 99.00th=[67634], 99.50th=[73925], 99.90th=[77071], 99.95th=[77071],
00:11:51.873 | 99.99th=[77071]
00:11:51.873 bw ( KiB/s): min=10672, max=18440, per=15.48%, avg=14556.00, stdev=5492.81, samples=2
00:11:51.873 iops : min= 2668, max= 4612, avg=3640.00, stdev=1374.62, samples=2
00:11:51.873 lat (msec) : 2=0.01%, 4=2.03%, 10=38.89%, 20=31.28%, 50=24.35%
00:11:51.873 lat (msec) : 100=3.44%
00:11:51.873 cpu : usr=1.69%, sys=3.89%, ctx=459, majf=0, minf=1
00:11:51.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1%
00:11:51.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:51.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:51.873 issued rwts: total=3584,3768,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:51.873 latency : target=0, window=0, percentile=100.00%, depth=128
00:11:51.873 job1: (groupid=0, jobs=1): err= 0: pid=3761738: Wed Nov 20 14:31:58 2024
00:11:51.873 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec)
00:11:51.873 slat (nsec): min=928, max=11478k, avg=72974.00, stdev=584496.05
00:11:51.873 clat (usec): min=1770, max=31516, avg=9599.54, stdev=3535.18
00:11:51.873 lat (usec): min=1787, max=31522, avg=9672.51, stdev=3590.63
00:11:51.873 clat percentiles (usec):
00:11:51.873 | 1.00th=[ 3949], 5.00th=[ 6456], 10.00th=[ 6783], 20.00th=[ 7373],
00:11:51.873 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9503],
00:11:51.873 | 70.00th=[ 9896], 80.00th=[10814], 90.00th=[12780], 95.00th=[16057],
00:11:51.873 | 99.00th=[26084], 99.50th=[27657], 99.90th=[28967], 99.95th=[31589],
00:11:51.873 | 99.99th=[31589]
00:11:51.873 write: IOPS=6053, BW=23.6MiB/s (24.8MB/s)(23.8MiB/1008msec); 0 zone resets
00:11:51.873 slat (nsec): min=1646, max=39737k, avg=87327.73, stdev=865908.99
00:11:51.873 clat (usec): min=840, max=78192, avg=10998.85, stdev=9405.36
00:11:51.873 lat (usec): min=844, max=78200, avg=11086.18, stdev=9484.85
00:11:51.873 clat percentiles (usec):
00:11:51.873 | 1.00th=[ 1303], 5.00th=[ 3490], 10.00th=[ 4424], 20.00th=[ 5735],
00:11:51.873 | 30.00th=[ 6456], 40.00th=[ 6980], 50.00th=[ 7373], 60.00th=[ 8979],
00:11:51.873 | 70.00th=[10683], 80.00th=[13566], 90.00th=[22414], 95.00th=[32375],
00:11:51.873 | 99.00th=[45876], 99.50th=[46400], 99.90th=[78119], 99.95th=[78119],
00:11:51.873 | 99.99th=[78119]
00:11:51.873 bw ( KiB/s): min=20480, max=27312, per=25.41%, avg=23896.00, stdev=4830.95, samples=2
00:11:51.873 iops : min= 5120, max= 6828, avg=5974.00, stdev=1207.74, samples=2
00:11:51.873 lat (usec) : 1000=0.18%
00:11:51.873 lat (msec) : 2=0.74%, 4=3.55%, 10=65.31%, 20=22.98%, 50=7.09%
00:11:51.873 lat (msec) : 100=0.14%
00:11:51.873 cpu : usr=3.48%, sys=4.77%, ctx=419, majf=0, minf=1
00:11:51.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:11:51.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:51.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:51.873 issued rwts: total=5632,6102,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:51.873 latency : target=0, window=0, percentile=100.00%,
depth=128 00:11:51.873 job2: (groupid=0, jobs=1): err= 0: pid=3761760: Wed Nov 20 14:31:58 2024 00:11:51.873 read: IOPS=6572, BW=25.7MiB/s (26.9MB/s)(25.7MiB/1002msec) 00:11:51.873 slat (nsec): min=916, max=7374.7k, avg=68873.39, stdev=427320.52 00:11:51.873 clat (usec): min=1164, max=26711, avg=8577.42, stdev=2539.15 00:11:51.873 lat (usec): min=4394, max=26717, avg=8646.29, stdev=2567.78 00:11:51.873 clat percentiles (usec): 00:11:51.873 | 1.00th=[ 5080], 5.00th=[ 5866], 10.00th=[ 6652], 20.00th=[ 7111], 00:11:51.873 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8455], 00:11:51.874 | 70.00th=[ 8848], 80.00th=[ 9372], 90.00th=[10683], 95.00th=[11338], 00:11:51.874 | 99.00th=[22938], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:11:51.874 | 99.99th=[26608] 00:11:51.874 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:11:51.874 slat (nsec): min=1567, max=40014k, avg=78047.63, stdev=665512.36 00:11:51.874 clat (usec): min=4126, max=53137, avg=10576.88, stdev=6972.17 00:11:51.874 lat (usec): min=4133, max=53142, avg=10654.93, stdev=7009.30 00:11:51.874 clat percentiles (usec): 00:11:51.874 | 1.00th=[ 4555], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 7046], 00:11:51.874 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8356], 00:11:51.874 | 70.00th=[ 8979], 80.00th=[11994], 90.00th=[19006], 95.00th=[24511], 00:11:51.874 | 99.00th=[48497], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:11:51.874 | 99.99th=[53216] 00:11:51.874 bw ( KiB/s): min=24656, max=28592, per=28.32%, avg=26624.00, stdev=2783.17, samples=2 00:11:51.874 iops : min= 6164, max= 7148, avg=6656.00, stdev=695.79, samples=2 00:11:51.874 lat (msec) : 2=0.01%, 10=80.96%, 20=13.69%, 50=5.33%, 100=0.01% 00:11:51.874 cpu : usr=3.20%, sys=5.99%, ctx=830, majf=0, minf=2 00:11:51.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:51.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:51.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:51.874 issued rwts: total=6586,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:51.874 job3: (groupid=0, jobs=1): err= 0: pid=3761770: Wed Nov 20 14:31:58 2024 00:11:51.874 read: IOPS=6867, BW=26.8MiB/s (28.1MB/s)(27.0MiB/1006msec) 00:11:51.874 slat (nsec): min=986, max=15626k, avg=76588.04, stdev=597223.25 00:11:51.874 clat (usec): min=1457, max=34427, avg=9977.76, stdev=3918.13 00:11:51.874 lat (usec): min=3045, max=34457, avg=10054.35, stdev=3966.87 00:11:51.874 clat percentiles (usec): 00:11:51.874 | 1.00th=[ 5538], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 7635], 00:11:51.874 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9503], 00:11:51.874 | 70.00th=[10290], 80.00th=[11338], 90.00th=[13960], 95.00th=[20055], 00:11:51.874 | 99.00th=[24511], 99.50th=[27132], 99.90th=[31327], 99.95th=[31327], 00:11:51.874 | 99.99th=[34341] 00:11:51.874 write: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec); 0 zone resets 00:11:51.874 slat (nsec): min=1686, max=9942.0k, avg=58712.10, stdev=477506.73 00:11:51.874 clat (usec): min=728, max=33729, avg=8169.47, stdev=4117.34 00:11:51.874 lat (usec): min=736, max=33738, avg=8228.18, stdev=4144.23 00:11:51.874 clat percentiles (usec): 00:11:51.874 | 1.00th=[ 2409], 5.00th=[ 3884], 10.00th=[ 4621], 20.00th=[ 5473], 00:11:51.874 | 30.00th=[ 6521], 40.00th=[ 7177], 50.00th=[ 7635], 60.00th=[ 7963], 00:11:51.874 | 70.00th=[ 8848], 80.00th=[ 9765], 90.00th=[10814], 95.00th=[13566], 00:11:51.874 | 99.00th=[28705], 99.50th=[30540], 99.90th=[33817], 99.95th=[33817], 00:11:51.874 | 99.99th=[33817] 00:11:51.874 bw ( KiB/s): min=23240, max=34104, per=30.49%, avg=28672.00, stdev=7682.01, samples=2 00:11:51.874 iops : min= 5810, max= 8526, avg=7168.00, stdev=1920.50, samples=2 00:11:51.874 lat (usec) : 750=0.02% 00:11:51.874 lat (msec) : 2=0.31%, 4=2.94%, 10=71.60%, 
20=21.23%, 50=3.89% 00:11:51.874 cpu : usr=3.38%, sys=5.96%, ctx=357, majf=0, minf=1 00:11:51.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:51.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:51.874 issued rwts: total=6909,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:51.874 00:11:51.874 Run status group 0 (all jobs): 00:11:51.874 READ: bw=88.0MiB/s (92.3MB/s), 13.9MiB/s-26.8MiB/s (14.6MB/s-28.1MB/s), io=88.7MiB (93.0MB), run=1002-1008msec 00:11:51.874 WRITE: bw=91.8MiB/s (96.3MB/s), 14.7MiB/s-27.8MiB/s (15.4MB/s-29.2MB/s), io=92.6MiB (97.1MB), run=1002-1008msec 00:11:51.874 00:11:51.874 Disk stats (read/write): 00:11:51.874 nvme0n1: ios=3124/3231, merge=0/0, ticks=30363/48390, in_queue=78753, util=95.99% 00:11:51.874 nvme0n2: ios=4651/4840, merge=0/0, ticks=42953/45372, in_queue=88325, util=98.57% 00:11:51.874 nvme0n3: ios=5661/5695, merge=0/0, ticks=24041/27090, in_queue=51131, util=95.68% 00:11:51.874 nvme0n4: ios=5526/5632, merge=0/0, ticks=43711/35110, in_queue=78821, util=97.44% 00:11:51.874 14:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:51.874 [global] 00:11:51.874 thread=1 00:11:51.874 invalidate=1 00:11:51.874 rw=randwrite 00:11:51.874 time_based=1 00:11:51.874 runtime=1 00:11:51.874 ioengine=libaio 00:11:51.874 direct=1 00:11:51.874 bs=4096 00:11:51.874 iodepth=128 00:11:51.874 norandommap=0 00:11:51.874 numjobs=1 00:11:51.874 00:11:51.874 verify_dump=1 00:11:51.874 verify_backlog=512 00:11:51.874 verify_state_save=0 00:11:51.874 do_verify=1 00:11:51.874 verify=crc32c-intel 00:11:51.874 [job0] 00:11:51.874 filename=/dev/nvme0n1 00:11:51.874 [job1] 00:11:51.874 filename=/dev/nvme0n2 
00:11:51.874 [job2] 00:11:51.874 filename=/dev/nvme0n3 00:11:51.874 [job3] 00:11:51.874 filename=/dev/nvme0n4 00:11:51.874 Could not set queue depth (nvme0n1) 00:11:51.874 Could not set queue depth (nvme0n2) 00:11:51.874 Could not set queue depth (nvme0n3) 00:11:51.874 Could not set queue depth (nvme0n4) 00:11:52.138 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:52.138 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:52.138 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:52.138 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:52.138 fio-3.35 00:11:52.138 Starting 4 threads 00:11:53.522 00:11:53.522 job0: (groupid=0, jobs=1): err= 0: pid=3762233: Wed Nov 20 14:32:00 2024 00:11:53.522 read: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec) 00:11:53.522 slat (nsec): min=916, max=12453k, avg=80069.33, stdev=535351.11 00:11:53.522 clat (usec): min=2541, max=26301, avg=10191.77, stdev=3027.65 00:11:53.522 lat (usec): min=2545, max=26312, avg=10271.84, stdev=3077.72 00:11:53.522 clat percentiles (usec): 00:11:53.522 | 1.00th=[ 4047], 5.00th=[ 5735], 10.00th=[ 6521], 20.00th=[ 7177], 00:11:53.522 | 30.00th=[ 8586], 40.00th=[ 9634], 50.00th=[10421], 60.00th=[10945], 00:11:53.522 | 70.00th=[11469], 80.00th=[12649], 90.00th=[14091], 95.00th=[14746], 00:11:53.522 | 99.00th=[19530], 99.50th=[22938], 99.90th=[22938], 99.95th=[22938], 00:11:53.522 | 99.99th=[26346] 00:11:53.522 write: IOPS=6265, BW=24.5MiB/s (25.7MB/s)(24.6MiB/1006msec); 0 zone resets 00:11:53.522 slat (nsec): min=1571, max=8387.7k, avg=72963.35, stdev=414750.44 00:11:53.522 clat (usec): min=543, max=30508, avg=10292.39, stdev=4449.61 00:11:53.522 lat (usec): min=547, max=30510, avg=10365.35, stdev=4480.60 00:11:53.522 clat percentiles (usec): 
00:11:53.522 | 1.00th=[ 1991], 5.00th=[ 4228], 10.00th=[ 5407], 20.00th=[ 6652], 00:11:53.522 | 30.00th=[ 7177], 40.00th=[ 8356], 50.00th=[ 9110], 60.00th=[10421], 00:11:53.522 | 70.00th=[12649], 80.00th=[15008], 90.00th=[16909], 95.00th=[17957], 00:11:53.522 | 99.00th=[21103], 99.50th=[22152], 99.90th=[25297], 99.95th=[28443], 00:11:53.522 | 99.99th=[30540] 00:11:53.522 bw ( KiB/s): min=20736, max=28672, per=25.46%, avg=24704.00, stdev=5611.60, samples=2 00:11:53.522 iops : min= 5184, max= 7168, avg=6176.00, stdev=1402.90, samples=2 00:11:53.522 lat (usec) : 750=0.04%, 1000=0.06% 00:11:53.522 lat (msec) : 2=0.42%, 4=1.88%, 10=48.79%, 20=47.71%, 50=1.10% 00:11:53.522 cpu : usr=3.38%, sys=4.48%, ctx=613, majf=0, minf=1 00:11:53.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:53.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.522 issued rwts: total=6144,6303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.522 job1: (groupid=0, jobs=1): err= 0: pid=3762247: Wed Nov 20 14:32:00 2024 00:11:53.522 read: IOPS=6364, BW=24.9MiB/s (26.1MB/s)(25.0MiB/1005msec) 00:11:53.522 slat (nsec): min=912, max=63344k, avg=81688.77, stdev=933279.14 00:11:53.522 clat (usec): min=1724, max=82439, avg=10204.80, stdev=9041.95 00:11:53.522 lat (usec): min=2951, max=82464, avg=10286.49, stdev=9095.57 00:11:53.522 clat percentiles (usec): 00:11:53.522 | 1.00th=[ 3654], 5.00th=[ 5800], 10.00th=[ 6325], 20.00th=[ 6652], 00:11:53.522 | 30.00th=[ 6915], 40.00th=[ 7308], 50.00th=[ 8356], 60.00th=[ 9503], 00:11:53.522 | 70.00th=[10028], 80.00th=[11863], 90.00th=[13435], 95.00th=[15926], 00:11:53.522 | 99.00th=[72877], 99.50th=[72877], 99.90th=[72877], 99.95th=[76022], 00:11:53.522 | 99.99th=[82314] 00:11:53.522 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone 
resets 00:11:53.522 slat (nsec): min=1499, max=14126k, avg=67233.75, stdev=370776.05 00:11:53.522 clat (usec): min=1096, max=38718, avg=9339.83, stdev=5286.24 00:11:53.522 lat (usec): min=1105, max=39121, avg=9407.06, stdev=5319.75 00:11:53.522 clat percentiles (usec): 00:11:53.522 | 1.00th=[ 2507], 5.00th=[ 4080], 10.00th=[ 4686], 20.00th=[ 6063], 00:11:53.522 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7504], 00:11:53.522 | 70.00th=[11469], 80.00th=[13829], 90.00th=[15533], 95.00th=[16581], 00:11:53.522 | 99.00th=[35390], 99.50th=[36439], 99.90th=[38011], 99.95th=[38536], 00:11:53.522 | 99.99th=[38536] 00:11:53.522 bw ( KiB/s): min=19640, max=33608, per=27.44%, avg=26624.00, stdev=9876.87, samples=2 00:11:53.522 iops : min= 4910, max= 8402, avg=6656.00, stdev=2469.22, samples=2 00:11:53.523 lat (msec) : 2=0.15%, 4=2.47%, 10=65.27%, 20=30.14%, 50=1.00% 00:11:53.523 lat (msec) : 100=0.97% 00:11:53.523 cpu : usr=3.19%, sys=4.98%, ctx=744, majf=0, minf=3 00:11:53.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:53.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.523 issued rwts: total=6396,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.523 job2: (groupid=0, jobs=1): err= 0: pid=3762266: Wed Nov 20 14:32:00 2024 00:11:53.523 read: IOPS=6647, BW=26.0MiB/s (27.2MB/s)(26.1MiB/1007msec) 00:11:53.523 slat (nsec): min=987, max=7589.2k, avg=70508.56, stdev=532458.70 00:11:53.523 clat (usec): min=2665, max=19428, avg=8825.24, stdev=2427.50 00:11:53.523 lat (usec): min=2677, max=19432, avg=8895.75, stdev=2467.95 00:11:53.523 clat percentiles (usec): 00:11:53.523 | 1.00th=[ 4113], 5.00th=[ 6259], 10.00th=[ 6587], 20.00th=[ 7111], 00:11:53.523 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 8094], 60.00th=[ 8586], 00:11:53.523 | 
70.00th=[ 9372], 80.00th=[10552], 90.00th=[12649], 95.00th=[13304], 00:11:53.523 | 99.00th=[16712], 99.50th=[17957], 99.90th=[19530], 99.95th=[19530], 00:11:53.523 | 99.99th=[19530] 00:11:53.523 write: IOPS=7118, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1007msec); 0 zone resets 00:11:53.523 slat (nsec): min=1636, max=17315k, avg=69795.88, stdev=428868.36 00:11:53.523 clat (usec): min=1420, max=59065, avg=9566.41, stdev=6987.61 00:11:53.523 lat (usec): min=1427, max=59068, avg=9636.21, stdev=7032.78 00:11:53.523 clat percentiles (usec): 00:11:53.523 | 1.00th=[ 2540], 5.00th=[ 4080], 10.00th=[ 5276], 20.00th=[ 6587], 00:11:53.523 | 30.00th=[ 7242], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 8029], 00:11:53.523 | 70.00th=[ 8225], 80.00th=[11731], 90.00th=[14746], 95.00th=[18482], 00:11:53.523 | 99.00th=[46400], 99.50th=[56361], 99.90th=[57934], 99.95th=[58983], 00:11:53.523 | 99.99th=[58983] 00:11:53.523 bw ( KiB/s): min=26488, max=30144, per=29.19%, avg=28316.00, stdev=2585.18, samples=2 00:11:53.523 iops : min= 6622, max= 7536, avg=7079.00, stdev=646.30, samples=2 00:11:53.523 lat (msec) : 2=0.18%, 4=2.50%, 10=74.42%, 20=20.50%, 50=1.96% 00:11:53.523 lat (msec) : 100=0.44% 00:11:53.523 cpu : usr=4.37%, sys=4.97%, ctx=758, majf=0, minf=1 00:11:53.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:53.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.523 issued rwts: total=6694,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.523 job3: (groupid=0, jobs=1): err= 0: pid=3762274: Wed Nov 20 14:32:00 2024 00:11:53.523 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:11:53.523 slat (nsec): min=940, max=15387k, avg=123435.71, stdev=825468.95 00:11:53.523 clat (usec): min=3925, max=74810, avg=15009.73, stdev=8028.42 00:11:53.523 lat (usec): 
min=3934, max=74817, avg=15133.16, stdev=8119.02 00:11:53.523 clat percentiles (usec): 00:11:53.523 | 1.00th=[ 6128], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 7963], 00:11:53.523 | 30.00th=[10290], 40.00th=[13566], 50.00th=[14222], 60.00th=[15139], 00:11:53.523 | 70.00th=[17433], 80.00th=[18744], 90.00th=[20579], 95.00th=[27657], 00:11:53.523 | 99.00th=[56886], 99.50th=[67634], 99.90th=[74974], 99.95th=[74974], 00:11:53.523 | 99.99th=[74974] 00:11:53.523 write: IOPS=4276, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1005msec); 0 zone resets 00:11:53.523 slat (nsec): min=1556, max=17909k, avg=105071.55, stdev=531595.68 00:11:53.523 clat (usec): min=1115, max=74789, avg=15359.70, stdev=9633.18 00:11:53.523 lat (usec): min=1124, max=74798, avg=15464.77, stdev=9678.46 00:11:53.523 clat percentiles (usec): 00:11:53.523 | 1.00th=[ 2409], 5.00th=[ 4686], 10.00th=[ 6259], 20.00th=[ 6915], 00:11:53.523 | 30.00th=[10421], 40.00th=[12911], 50.00th=[14222], 60.00th=[15270], 00:11:53.523 | 70.00th=[17433], 80.00th=[22152], 90.00th=[25297], 95.00th=[31327], 00:11:53.523 | 99.00th=[62129], 99.50th=[64750], 99.90th=[67634], 99.95th=[67634], 00:11:53.523 | 99.99th=[74974] 00:11:53.523 bw ( KiB/s): min=14536, max=18832, per=17.20%, avg=16684.00, stdev=3037.73, samples=2 00:11:53.523 iops : min= 3634, max= 4708, avg=4171.00, stdev=759.43, samples=2 00:11:53.523 lat (msec) : 2=0.43%, 4=1.37%, 10=27.03%, 20=53.31%, 50=16.44% 00:11:53.523 lat (msec) : 100=1.42% 00:11:53.523 cpu : usr=1.89%, sys=4.08%, ctx=469, majf=0, minf=1 00:11:53.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:53.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.523 issued rwts: total=4096,4298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.523 00:11:53.523 Run status group 0 (all jobs): 00:11:53.523 
READ: bw=90.5MiB/s (94.9MB/s), 15.9MiB/s-26.0MiB/s (16.7MB/s-27.2MB/s), io=91.1MiB (95.6MB), run=1005-1007msec 00:11:53.523 WRITE: bw=94.7MiB/s (99.3MB/s), 16.7MiB/s-27.8MiB/s (17.5MB/s-29.2MB/s), io=95.4MiB (100MB), run=1005-1007msec 00:11:53.523 00:11:53.523 Disk stats (read/write): 00:11:53.523 nvme0n1: ios=5203/5632, merge=0/0, ticks=31551/32257, in_queue=63808, util=96.39% 00:11:53.523 nvme0n2: ios=4883/5120, merge=0/0, ticks=34633/32667, in_queue=67300, util=95.01% 00:11:53.523 nvme0n3: ios=5518/5632, merge=0/0, ticks=47675/55598, in_queue=103273, util=97.26% 00:11:53.523 nvme0n4: ios=3584/3607, merge=0/0, ticks=31832/33943, in_queue=65775, util=89.43% 00:11:53.523 14:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:53.523 14:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3762371 00:11:53.523 14:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:53.523 14:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:53.523 [global] 00:11:53.523 thread=1 00:11:53.523 invalidate=1 00:11:53.523 rw=read 00:11:53.523 time_based=1 00:11:53.523 runtime=10 00:11:53.523 ioengine=libaio 00:11:53.523 direct=1 00:11:53.523 bs=4096 00:11:53.523 iodepth=1 00:11:53.523 norandommap=1 00:11:53.523 numjobs=1 00:11:53.523 00:11:53.523 [job0] 00:11:53.523 filename=/dev/nvme0n1 00:11:53.523 [job1] 00:11:53.523 filename=/dev/nvme0n2 00:11:53.523 [job2] 00:11:53.523 filename=/dev/nvme0n3 00:11:53.523 [job3] 00:11:53.523 filename=/dev/nvme0n4 00:11:53.523 Could not set queue depth (nvme0n1) 00:11:53.523 Could not set queue depth (nvme0n2) 00:11:53.523 Could not set queue depth (nvme0n3) 00:11:53.523 Could not set queue depth (nvme0n4) 00:11:53.782 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:11:53.782 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:53.782 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:53.782 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:53.782 fio-3.35 00:11:53.782 Starting 4 threads 00:11:56.320 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:56.580 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:56.580 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2379776, buflen=4096 00:11:56.580 fio: pid=3762790, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:56.580 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9433088, buflen=4096 00:11:56.580 fio: pid=3762781, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:56.580 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:56.580 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:56.840 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=17645568, buflen=4096 00:11:56.840 fio: pid=3762745, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:56.840 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:56.840 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:56.840 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=15863808, buflen=4096 00:11:56.840 fio: pid=3762758, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:57.099 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:57.099 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:57.099 00:11:57.099 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3762745: Wed Nov 20 14:32:03 2024 00:11:57.099 read: IOPS=1447, BW=5790KiB/s (5929kB/s)(16.8MiB/2976msec) 00:11:57.099 slat (usec): min=2, max=15561, avg=26.67, stdev=408.56 00:11:57.099 clat (usec): min=111, max=42144, avg=660.65, stdev=674.29 00:11:57.099 lat (usec): min=114, max=42187, avg=687.33, stdev=788.84 00:11:57.099 clat percentiles (usec): 00:11:57.099 | 1.00th=[ 253], 5.00th=[ 416], 10.00th=[ 453], 20.00th=[ 494], 00:11:57.099 | 30.00th=[ 519], 40.00th=[ 537], 50.00th=[ 553], 60.00th=[ 578], 00:11:57.099 | 70.00th=[ 619], 80.00th=[ 955], 90.00th=[ 1045], 95.00th=[ 1106], 00:11:57.099 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1303], 99.95th=[ 1401], 00:11:57.099 | 99.99th=[42206] 00:11:57.099 bw ( KiB/s): min= 3912, max= 7392, per=42.01%, avg=5929.60, stdev=1769.86, samples=5 00:11:57.099 iops : min= 978, max= 1848, avg=1482.40, stdev=442.46, samples=5 00:11:57.099 lat (usec) : 250=0.97%, 500=20.84%, 750=51.64%, 1000=11.44% 00:11:57.099 lat (msec) : 2=15.04%, 4=0.02%, 50=0.02% 00:11:57.099 cpu : usr=0.84%, sys=2.79%, ctx=4314, majf=0, minf=1 00:11:57.099 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:57.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.100 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.100 issued rwts: total=4309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:57.100 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3762758: Wed Nov 20 14:32:03 2024 00:11:57.100 read: IOPS=1235, BW=4940KiB/s (5059kB/s)(15.1MiB/3136msec) 00:11:57.100 slat (usec): min=2, max=13424, avg=25.09, stdev=335.57 00:11:57.100 clat (usec): min=204, max=42270, avg=780.70, stdev=1337.19 00:11:57.100 lat (usec): min=215, max=42298, avg=805.79, stdev=1381.08 00:11:57.100 clat percentiles (usec): 00:11:57.100 | 1.00th=[ 326], 5.00th=[ 482], 10.00th=[ 529], 20.00th=[ 611], 00:11:57.100 | 30.00th=[ 660], 40.00th=[ 709], 50.00th=[ 742], 60.00th=[ 791], 00:11:57.100 | 70.00th=[ 824], 80.00th=[ 857], 90.00th=[ 914], 95.00th=[ 996], 00:11:57.100 | 99.00th=[ 1139], 99.50th=[ 1221], 99.90th=[41681], 99.95th=[42206], 00:11:57.100 | 99.99th=[42206] 00:11:57.100 bw ( KiB/s): min= 2938, max= 5688, per=35.77%, avg=5048.33, stdev=1052.86, samples=6 00:11:57.100 iops : min= 734, max= 1422, avg=1262.00, stdev=263.41, samples=6 00:11:57.100 lat (usec) : 250=0.31%, 500=6.63%, 750=44.30%, 1000=44.14% 00:11:57.100 lat (msec) : 2=4.49%, 50=0.10% 00:11:57.100 cpu : usr=0.83%, sys=1.95%, ctx=3881, majf=0, minf=2 00:11:57.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:57.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.100 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.100 issued rwts: total=3874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:57.100 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3762781: Wed Nov 20 14:32:03 2024 
00:11:57.100 read: IOPS=817, BW=3268KiB/s (3346kB/s)(9212KiB/2819msec) 00:11:57.100 slat (usec): min=2, max=10875, avg=26.05, stdev=291.31 00:11:57.100 clat (usec): min=436, max=42394, avg=1193.39, stdev=2820.54 00:11:57.100 lat (usec): min=451, max=42418, avg=1219.45, stdev=2835.16 00:11:57.100 clat percentiles (usec): 00:11:57.100 | 1.00th=[ 750], 5.00th=[ 848], 10.00th=[ 889], 20.00th=[ 930], 00:11:57.100 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:11:57.100 | 70.00th=[ 1029], 80.00th=[ 1074], 90.00th=[ 1139], 95.00th=[ 1221], 00:11:57.100 | 99.00th=[ 1319], 99.50th=[ 1500], 99.90th=[42206], 99.95th=[42206], 00:11:57.100 | 99.99th=[42206] 00:11:57.100 bw ( KiB/s): min= 1136, max= 4128, per=22.69%, avg=3203.20, stdev=1225.72, samples=5 00:11:57.100 iops : min= 284, max= 1032, avg=800.80, stdev=306.43, samples=5 00:11:57.100 lat (usec) : 500=0.04%, 750=0.95%, 1000=56.03% 00:11:57.100 lat (msec) : 2=42.45%, 50=0.48% 00:11:57.100 cpu : usr=0.92%, sys=2.34%, ctx=2306, majf=0, minf=2 00:11:57.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:57.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.100 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.100 issued rwts: total=2304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:57.100 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3762790: Wed Nov 20 14:32:03 2024 00:11:57.100 read: IOPS=216, BW=865KiB/s (886kB/s)(2324KiB/2687msec) 00:11:57.100 slat (nsec): min=1998, max=40314, avg=17265.01, stdev=5995.28 00:11:57.100 clat (usec): min=765, max=43373, avg=4602.18, stdev=11494.10 00:11:57.100 lat (usec): min=768, max=43402, avg=4619.46, stdev=11496.69 00:11:57.100 clat percentiles (usec): 00:11:57.100 | 1.00th=[ 816], 5.00th=[ 889], 10.00th=[ 947], 20.00th=[ 996], 00:11:57.100 | 
30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1123],
00:11:57.100 | 70.00th=[ 1156], 80.00th=[ 1205], 90.00th=[ 1303], 95.00th=[42206],
00:11:57.100 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254],
00:11:57.100 | 99.99th=[43254]
00:11:57.100 bw ( KiB/s): min= 96, max= 3168, per=6.53%, avg=921.60, stdev=1288.20, samples=5
00:11:57.100 iops : min= 24, max= 792, avg=230.40, stdev=322.05, samples=5
00:11:57.100 lat (usec) : 1000=21.48%
00:11:57.100 lat (msec) : 2=69.76%, 50=8.59%
00:11:57.100 cpu : usr=0.26%, sys=0.56%, ctx=582, majf=0, minf=2
00:11:57.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:57.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:57.100 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:57.100 issued rwts: total=582,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:57.100 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:57.100
00:11:57.100 Run status group 0 (all jobs):
00:11:57.100 READ: bw=13.8MiB/s (14.5MB/s), 865KiB/s-5790KiB/s (886kB/s-5929kB/s), io=43.2MiB (45.3MB), run=2687-3136msec
00:11:57.100
00:11:57.100 Disk stats (read/write):
00:11:57.100 nvme0n1: ios=4202/0, merge=0/0, ticks=2639/0, in_queue=2639, util=93.82%
00:11:57.100 nvme0n2: ios=3875/0, merge=0/0, ticks=3066/0, in_queue=3066, util=97.92%
00:11:57.100 nvme0n3: ios=2087/0, merge=0/0, ticks=2392/0, in_queue=2392, util=96.03%
00:11:57.100 nvme0n4: ios=578/0, merge=0/0, ticks=2513/0, in_queue=2513, util=96.42%
00:11:57.100 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:57.100 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:11:57.360 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:57.360 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:11:57.360 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:57.360 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:11:57.620 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:11:57.620 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:11:57.880 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:11:57.880 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3762371
00:11:57.880 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:11:57.880 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:57.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:57.880 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:57.880 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0
00:11:57.880 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:57.880 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:57.880 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:57.880 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:57.880 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0
00:11:57.880 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:11:57.880 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:11:57.880 nvmf hotplug test: fio failed as expected
00:11:57.880 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:57.880 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:11:57.880 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:11:58.140 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:11:58.140 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:11:58.140 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:11:58.140 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:58.140 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:11:58.140 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:58.140 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:11:58.140 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:58.140 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:58.140 rmmod nvme_tcp
00:11:58.140 rmmod nvme_fabrics
00:11:58.140 rmmod nvme_keyring
00:11:58.140 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3758806 ']'
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3758806
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3758806 ']'
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3758806
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3758806
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3758806'
00:11:58.140 killing process with pid 3758806
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3758806
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3758806
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:58.140 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:00.680
00:12:00.680 real 0m25.735s
00:12:00.680 user 2m12.648s
00:12:00.680 sys 0m7.371s
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:12:00.680 ************************************
00:12:00.680 END TEST nvmf_fio_target
00:12:00.680 ************************************
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:12:00.680 ************************************
00:12:00.680 START TEST nvmf_bdevio
00:12:00.680 ************************************
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:12:00.680 * Looking for test storage...
00:12:00.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:00.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:00.680 --rc genhtml_branch_coverage=1
00:12:00.680 --rc genhtml_function_coverage=1
00:12:00.680 --rc genhtml_legend=1
00:12:00.680 --rc geninfo_all_blocks=1
00:12:00.680 --rc geninfo_unexecuted_blocks=1
00:12:00.680
00:12:00.680 '
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:00.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:00.680 --rc genhtml_branch_coverage=1
00:12:00.680 --rc genhtml_function_coverage=1
00:12:00.680 --rc genhtml_legend=1
00:12:00.680 --rc geninfo_all_blocks=1
00:12:00.680 --rc geninfo_unexecuted_blocks=1
00:12:00.680
00:12:00.680 '
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:12:00.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:00.680 --rc genhtml_branch_coverage=1
00:12:00.680 --rc genhtml_function_coverage=1
00:12:00.680 --rc genhtml_legend=1
00:12:00.680 --rc geninfo_all_blocks=1
00:12:00.680 --rc geninfo_unexecuted_blocks=1
00:12:00.680
00:12:00.680 '
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:12:00.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:00.680 --rc genhtml_branch_coverage=1
00:12:00.680 --rc genhtml_function_coverage=1
00:12:00.680 --rc genhtml_legend=1
00:12:00.680 --rc geninfo_all_blocks=1
00:12:00.680 --rc geninfo_unexecuted_blocks=1
00:12:00.680
00:12:00.680 '
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:12:00.680 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:00.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable
00:12:00.681 14:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:05.961 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:05.961 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=()
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=()
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=()
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=()
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=()
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=()
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=()
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:12:05.962 Found 0000:31:00.0 (0x8086 - 0x159b)
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:12:05.962 Found 0000:31:00.1 (0x8086 - 0x159b)
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:12:05.962 Found net devices under 0000:31:00.0: cvl_0_0
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:12:05.962 Found net devices under 0000:31:00.1: cvl_0_1
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:05.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:05.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms
00:12:05.962
00:12:05.962 --- 10.0.0.2 ping statistics ---
00:12:05.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:05.962 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:05.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:05.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms
00:12:05.962
00:12:05.962 --- 10.0.0.1 ping statistics ---
00:12:05.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:05.962 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:12:05.962 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3768012
00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3768012
00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3768012 ']'
00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:05.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
[2024-11-20 14:32:12.642452] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization...
[2024-11-20 14:32:12.642504] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-11-20 14:32:12.712876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-11-20 14:32:12.741446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-11-20 14:32:12.741471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:05.963 [2024-11-20 14:32:12.741478] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.963 [2024-11-20 14:32:12.741482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.963 [2024-11-20 14:32:12.741486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.963 [2024-11-20 14:32:12.742813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:05.963 [2024-11-20 14:32:12.742969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:05.963 [2024-11-20 14:32:12.743117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.963 [2024-11-20 14:32:12.743119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:05.963 [2024-11-20 14:32:12.845195] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:05.963 Malloc0 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:05.963 [2024-11-20 
14:32:12.897057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:05.963 { 00:12:05.963 "params": { 00:12:05.963 "name": "Nvme$subsystem", 00:12:05.963 "trtype": "$TEST_TRANSPORT", 00:12:05.963 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:05.963 "adrfam": "ipv4", 00:12:05.963 "trsvcid": "$NVMF_PORT", 00:12:05.963 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:05.963 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:05.963 "hdgst": ${hdgst:-false}, 00:12:05.963 "ddgst": ${ddgst:-false} 00:12:05.963 }, 00:12:05.963 "method": "bdev_nvme_attach_controller" 00:12:05.963 } 00:12:05.963 EOF 00:12:05.963 )") 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:05.963 14:32:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:05.963 "params": { 00:12:05.963 "name": "Nvme1", 00:12:05.963 "trtype": "tcp", 00:12:05.963 "traddr": "10.0.0.2", 00:12:05.963 "adrfam": "ipv4", 00:12:05.963 "trsvcid": "4420", 00:12:05.963 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.963 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:05.963 "hdgst": false, 00:12:05.963 "ddgst": false 00:12:05.963 }, 00:12:05.963 "method": "bdev_nvme_attach_controller" 00:12:05.963 }' 00:12:05.963 [2024-11-20 14:32:12.936357] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:12:05.963 [2024-11-20 14:32:12.936409] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3768215 ] 00:12:05.963 [2024-11-20 14:32:13.014884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:06.224 [2024-11-20 14:32:13.053365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.224 [2024-11-20 14:32:13.053520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.224 [2024-11-20 14:32:13.053521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.484 I/O targets: 00:12:06.484 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:06.484 00:12:06.484 00:12:06.484 CUnit - A unit testing framework for C - Version 2.1-3 00:12:06.484 http://cunit.sourceforge.net/ 00:12:06.484 00:12:06.484 00:12:06.484 Suite: bdevio tests on: Nvme1n1 00:12:06.484 Test: blockdev write read block ...passed 00:12:06.484 Test: blockdev write zeroes read block ...passed 00:12:06.484 Test: blockdev write zeroes read no split ...passed 00:12:06.484 Test: blockdev write zeroes read split 
...passed 00:12:06.484 Test: blockdev write zeroes read split partial ...passed 00:12:06.484 Test: blockdev reset ...[2024-11-20 14:32:13.430960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:06.484 [2024-11-20 14:32:13.431027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20864b0 (9): Bad file descriptor 00:12:06.484 [2024-11-20 14:32:13.458992] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:12:06.484 passed 00:12:06.484 Test: blockdev write read 8 blocks ...passed 00:12:06.484 Test: blockdev write read size > 128k ...passed 00:12:06.484 Test: blockdev write read invalid size ...passed 00:12:06.484 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:06.484 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:06.484 Test: blockdev write read max offset ...passed 00:12:06.745 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:06.745 Test: blockdev writev readv 8 blocks ...passed 00:12:06.745 Test: blockdev writev readv 30 x 1block ...passed 00:12:06.745 Test: blockdev writev readv block ...passed 00:12:06.745 Test: blockdev writev readv size > 128k ...passed 00:12:06.745 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:06.745 Test: blockdev comparev and writev ...[2024-11-20 14:32:13.682820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.745 [2024-11-20 14:32:13.682846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:06.745 [2024-11-20 14:32:13.682857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.745 [2024-11-20 
14:32:13.682863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:06.745 [2024-11-20 14:32:13.683308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.745 [2024-11-20 14:32:13.683318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:06.745 [2024-11-20 14:32:13.683328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.745 [2024-11-20 14:32:13.683334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:06.745 [2024-11-20 14:32:13.683782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.745 [2024-11-20 14:32:13.683791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:06.745 [2024-11-20 14:32:13.683801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.745 [2024-11-20 14:32:13.683807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:06.745 [2024-11-20 14:32:13.684241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.745 [2024-11-20 14:32:13.684255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:06.745 [2024-11-20 14:32:13.684265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.745 [2024-11-20 14:32:13.684270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:06.745 passed 00:12:06.745 Test: blockdev nvme passthru rw ...passed 00:12:06.745 Test: blockdev nvme passthru vendor specific ...[2024-11-20 14:32:13.768144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:06.745 [2024-11-20 14:32:13.768154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:06.745 [2024-11-20 14:32:13.768466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:06.745 [2024-11-20 14:32:13.768475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:06.745 [2024-11-20 14:32:13.768799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:06.745 [2024-11-20 14:32:13.768807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:06.745 [2024-11-20 14:32:13.769140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:06.745 [2024-11-20 14:32:13.769149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:06.745 passed 00:12:06.745 Test: blockdev nvme admin passthru ...passed 00:12:07.005 Test: blockdev copy ...passed 00:12:07.005 00:12:07.005 Run Summary: Type Total Ran Passed Failed Inactive 00:12:07.005 suites 1 1 n/a 0 0 00:12:07.005 tests 23 23 23 0 0 00:12:07.005 asserts 152 152 152 0 n/a 00:12:07.005 00:12:07.005 Elapsed time = 1.010 seconds 
00:12:07.005 14:32:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.005 14:32:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.005 14:32:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:07.005 14:32:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.005 14:32:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:07.005 14:32:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:07.005 14:32:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:07.005 14:32:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:07.005 14:32:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:07.005 14:32:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:07.005 14:32:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.005 14:32:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:07.005 rmmod nvme_tcp 00:12:07.005 rmmod nvme_fabrics 00:12:07.005 rmmod nvme_keyring 00:12:07.005 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:07.005 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:07.005 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:07.005 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3768012 ']' 00:12:07.005 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3768012 00:12:07.005 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 3768012 ']' 00:12:07.005 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3768012 00:12:07.005 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:07.005 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.005 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3768012 00:12:07.005 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:07.005 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:07.005 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3768012' 00:12:07.005 killing process with pid 3768012 00:12:07.005 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3768012 00:12:07.005 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3768012 00:12:07.263 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:07.263 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:07.263 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:07.263 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:07.263 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:07.263 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:07.263 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:07.263 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:12:07.263 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:07.263 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.263 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.263 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.169 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:09.169 00:12:09.169 real 0m8.966s 00:12:09.169 user 0m9.047s 00:12:09.169 sys 0m4.364s 00:12:09.169 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.169 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.169 ************************************ 00:12:09.169 END TEST nvmf_bdevio 00:12:09.169 ************************************ 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:09.430 00:12:09.430 real 4m27.642s 00:12:09.430 user 10m48.313s 00:12:09.430 sys 1m26.607s 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:09.430 ************************************ 00:12:09.430 END TEST nvmf_target_core 00:12:09.430 ************************************ 00:12:09.430 14:32:16 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:09.430 14:32:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:09.430 14:32:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.430 14:32:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:12:09.430 ************************************ 00:12:09.430 START TEST nvmf_target_extra 00:12:09.430 ************************************ 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:09.430 * Looking for test storage... 00:12:09.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:09.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.430 --rc genhtml_branch_coverage=1 00:12:09.430 --rc genhtml_function_coverage=1 00:12:09.430 --rc genhtml_legend=1 00:12:09.430 --rc geninfo_all_blocks=1 
00:12:09.430 --rc geninfo_unexecuted_blocks=1 00:12:09.430 00:12:09.430 ' 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:09.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.430 --rc genhtml_branch_coverage=1 00:12:09.430 --rc genhtml_function_coverage=1 00:12:09.430 --rc genhtml_legend=1 00:12:09.430 --rc geninfo_all_blocks=1 00:12:09.430 --rc geninfo_unexecuted_blocks=1 00:12:09.430 00:12:09.430 ' 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:09.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.430 --rc genhtml_branch_coverage=1 00:12:09.430 --rc genhtml_function_coverage=1 00:12:09.430 --rc genhtml_legend=1 00:12:09.430 --rc geninfo_all_blocks=1 00:12:09.430 --rc geninfo_unexecuted_blocks=1 00:12:09.430 00:12:09.430 ' 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:09.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.430 --rc genhtml_branch_coverage=1 00:12:09.430 --rc genhtml_function_coverage=1 00:12:09.430 --rc genhtml_legend=1 00:12:09.430 --rc geninfo_all_blocks=1 00:12:09.430 --rc geninfo_unexecuted_blocks=1 00:12:09.430 00:12:09.430 ' 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.430 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:09.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:09.431 ************************************ 00:12:09.431 START TEST nvmf_example 00:12:09.431 ************************************ 00:12:09.431 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:09.692 * Looking for test storage... 00:12:09.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.692 
14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:09.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.692 --rc genhtml_branch_coverage=1 00:12:09.692 --rc genhtml_function_coverage=1 00:12:09.692 --rc genhtml_legend=1 00:12:09.692 --rc geninfo_all_blocks=1 00:12:09.692 --rc geninfo_unexecuted_blocks=1 00:12:09.692 00:12:09.692 ' 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:09.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.692 --rc genhtml_branch_coverage=1 00:12:09.692 --rc genhtml_function_coverage=1 00:12:09.692 --rc genhtml_legend=1 00:12:09.692 --rc geninfo_all_blocks=1 00:12:09.692 --rc geninfo_unexecuted_blocks=1 00:12:09.692 00:12:09.692 ' 00:12:09.692 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:09.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.693 --rc genhtml_branch_coverage=1 00:12:09.693 --rc genhtml_function_coverage=1 00:12:09.693 --rc genhtml_legend=1 00:12:09.693 --rc geninfo_all_blocks=1 00:12:09.693 --rc geninfo_unexecuted_blocks=1 00:12:09.693 00:12:09.693 ' 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:09.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.693 --rc 
genhtml_branch_coverage=1 00:12:09.693 --rc genhtml_function_coverage=1 00:12:09.693 --rc genhtml_legend=1 00:12:09.693 --rc geninfo_all_blocks=1 00:12:09.693 --rc geninfo_unexecuted_blocks=1 00:12:09.693 00:12:09.693 ' 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:09.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:09.693 14:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.693 
14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:09.693 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:15.134 14:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:15.134 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:15.134 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:15.134 Found net devices under 0000:31:00.0: cvl_0_0 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.134 14:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:15.134 Found net devices under 0000:31:00.1: cvl_0_1 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.134 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.135 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:15.135 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.135 
14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.135 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:15.135 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:15.135 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.135 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.135 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:15.135 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:15.135 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.135 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.135 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.135 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.135 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:15.135 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:15.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:12:15.135 00:12:15.135 --- 10.0.0.2 ping statistics --- 00:12:15.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.135 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:12:15.135 00:12:15.135 --- 10.0.0.1 ping statistics --- 00:12:15.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.135 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:15.135 14:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3772963 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3772963 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3772963 ']' 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.135 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:16.074 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.074 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:16.074 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:16.074 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:16.074 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:16.075 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.075 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.075 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:16.075 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.075 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:16.075 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.075 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:16.075 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.075 14:32:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:16.075 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:16.075 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.075 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:16.075 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.075 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:16.075 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:16.075 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.075 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:16.075 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.075 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.075 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.075 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:16.075 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.075 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:16.075 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:28.293 Initializing NVMe Controllers 00:12:28.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:28.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:28.294 Initialization complete. Launching workers. 00:12:28.294 ======================================================== 00:12:28.294 Latency(us) 00:12:28.294 Device Information : IOPS MiB/s Average min max 00:12:28.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19352.53 75.60 3306.66 621.87 16528.01 00:12:28.294 ======================================================== 00:12:28.294 Total : 19352.53 75.60 3306.66 621.87 16528.01 00:12:28.294 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:28.294 rmmod nvme_tcp 00:12:28.294 rmmod nvme_fabrics 00:12:28.294 rmmod nvme_keyring 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v 
-r nvme-fabrics 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3772963 ']' 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3772963 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3772963 ']' 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3772963 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3772963 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3772963' 00:12:28.294 killing process with pid 3772963 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3772963 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3772963 00:12:28.294 nvmf threads initialize successfully 00:12:28.294 bdev subsystem init successfully 00:12:28.294 created a nvmf target service 00:12:28.294 create targets's poll groups done 00:12:28.294 all subsystems of target started 00:12:28.294 nvmf target is running 00:12:28.294 all subsystems of target stopped 00:12:28.294 
destroy targets's poll groups done 00:12:28.294 destroyed the nvmf target service 00:12:28.294 bdev subsystem finish successfully 00:12:28.294 nvmf threads destroy successfully 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.294 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.554 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:28.554 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:28.554 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:28.554 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 
00:12:28.554 00:12:28.554 real 0m19.076s 00:12:28.554 user 0m45.451s 00:12:28.554 sys 0m5.295s 00:12:28.554 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.554 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:28.554 ************************************ 00:12:28.554 END TEST nvmf_example 00:12:28.554 ************************************ 00:12:28.554 14:32:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:28.554 14:32:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:28.554 14:32:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.554 14:32:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:28.554 ************************************ 00:12:28.554 START TEST nvmf_filesystem 00:12:28.554 ************************************ 00:12:28.554 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:28.817 * Looking for test storage... 
00:12:28.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:28.817 
14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:28.817 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:28.817 --rc genhtml_branch_coverage=1 00:12:28.817 --rc genhtml_function_coverage=1 00:12:28.817 --rc genhtml_legend=1 00:12:28.817 --rc geninfo_all_blocks=1 00:12:28.817 --rc geninfo_unexecuted_blocks=1 00:12:28.817 00:12:28.817 ' 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:28.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.817 --rc genhtml_branch_coverage=1 00:12:28.817 --rc genhtml_function_coverage=1 00:12:28.817 --rc genhtml_legend=1 00:12:28.817 --rc geninfo_all_blocks=1 00:12:28.817 --rc geninfo_unexecuted_blocks=1 00:12:28.817 00:12:28.817 ' 00:12:28.817 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:28.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.817 --rc genhtml_branch_coverage=1 00:12:28.817 --rc genhtml_function_coverage=1 00:12:28.818 --rc genhtml_legend=1 00:12:28.818 --rc geninfo_all_blocks=1 00:12:28.818 --rc geninfo_unexecuted_blocks=1 00:12:28.818 00:12:28.818 ' 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:28.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.818 --rc genhtml_branch_coverage=1 00:12:28.818 --rc genhtml_function_coverage=1 00:12:28.818 --rc genhtml_legend=1 00:12:28.818 --rc geninfo_all_blocks=1 00:12:28.818 --rc geninfo_unexecuted_blocks=1 00:12:28.818 00:12:28.818 ' 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:28.818 14:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:28.818 14:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:28.818 14:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:28.818 14:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:28.818 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:28.819 14:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:28.819 
14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:28.819 #define SPDK_CONFIG_H 00:12:28.819 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:28.819 #define SPDK_CONFIG_APPS 1 00:12:28.819 #define SPDK_CONFIG_ARCH native 00:12:28.819 #undef SPDK_CONFIG_ASAN 00:12:28.819 #undef SPDK_CONFIG_AVAHI 00:12:28.819 #undef SPDK_CONFIG_CET 00:12:28.819 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:28.819 #define SPDK_CONFIG_COVERAGE 1 00:12:28.819 #define SPDK_CONFIG_CROSS_PREFIX 00:12:28.819 #undef SPDK_CONFIG_CRYPTO 00:12:28.819 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:28.819 #undef SPDK_CONFIG_CUSTOMOCF 00:12:28.819 #undef SPDK_CONFIG_DAOS 00:12:28.819 #define SPDK_CONFIG_DAOS_DIR 00:12:28.819 #define SPDK_CONFIG_DEBUG 1 00:12:28.819 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:28.819 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:28.819 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:28.819 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:28.819 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:28.819 #undef SPDK_CONFIG_DPDK_UADK 00:12:28.819 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:28.819 #define SPDK_CONFIG_EXAMPLES 1 00:12:28.819 #undef SPDK_CONFIG_FC 00:12:28.819 #define SPDK_CONFIG_FC_PATH 00:12:28.819 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:28.819 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:28.819 #define SPDK_CONFIG_FSDEV 1 00:12:28.819 #undef SPDK_CONFIG_FUSE 00:12:28.819 #undef SPDK_CONFIG_FUZZER 00:12:28.819 #define SPDK_CONFIG_FUZZER_LIB 00:12:28.819 #undef SPDK_CONFIG_GOLANG 00:12:28.819 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:28.819 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:28.819 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:28.819 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:28.819 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:28.819 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:28.819 #undef SPDK_CONFIG_HAVE_LZ4 00:12:28.819 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:28.819 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:28.819 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:28.819 #define SPDK_CONFIG_IDXD 1 00:12:28.819 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:28.819 #undef SPDK_CONFIG_IPSEC_MB 00:12:28.819 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:28.819 #define SPDK_CONFIG_ISAL 1 00:12:28.819 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:28.819 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:28.819 #define SPDK_CONFIG_LIBDIR 00:12:28.819 #undef SPDK_CONFIG_LTO 00:12:28.819 #define SPDK_CONFIG_MAX_LCORES 128 00:12:28.819 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:28.819 #define SPDK_CONFIG_NVME_CUSE 1 00:12:28.819 #undef SPDK_CONFIG_OCF 00:12:28.819 #define SPDK_CONFIG_OCF_PATH 00:12:28.819 #define SPDK_CONFIG_OPENSSL_PATH 00:12:28.819 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:28.819 #define SPDK_CONFIG_PGO_DIR 00:12:28.819 #undef SPDK_CONFIG_PGO_USE 00:12:28.819 #define SPDK_CONFIG_PREFIX /usr/local 00:12:28.819 #undef SPDK_CONFIG_RAID5F 00:12:28.819 #undef SPDK_CONFIG_RBD 00:12:28.819 #define SPDK_CONFIG_RDMA 1 00:12:28.819 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:28.819 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:28.819 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:28.819 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:28.819 #define SPDK_CONFIG_SHARED 1 00:12:28.819 #undef SPDK_CONFIG_SMA 00:12:28.819 #define SPDK_CONFIG_TESTS 1 00:12:28.819 #undef SPDK_CONFIG_TSAN 00:12:28.819 #define SPDK_CONFIG_UBLK 1 00:12:28.819 #define SPDK_CONFIG_UBSAN 1 00:12:28.819 #undef SPDK_CONFIG_UNIT_TESTS 00:12:28.819 #undef SPDK_CONFIG_URING 00:12:28.819 #define SPDK_CONFIG_URING_PATH 00:12:28.819 #undef SPDK_CONFIG_URING_ZNS 00:12:28.819 #undef SPDK_CONFIG_USDT 00:12:28.819 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:28.819 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:28.819 #define SPDK_CONFIG_VFIO_USER 1 00:12:28.819 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:28.819 #define SPDK_CONFIG_VHOST 1 00:12:28.819 #define SPDK_CONFIG_VIRTIO 1 00:12:28.819 #undef SPDK_CONFIG_VTUNE 00:12:28.819 #define SPDK_CONFIG_VTUNE_DIR 00:12:28.819 #define SPDK_CONFIG_WERROR 1 00:12:28.819 #define SPDK_CONFIG_WPDK_DIR 00:12:28.819 #undef SPDK_CONFIG_XNVME 00:12:28.819 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:28.819 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:28.820 14:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:28.820 
14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:28.820 14:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:28.820 
14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:28.820 14:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:28.820 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:28.821 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3776072 ]] 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3776072 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.zSPygB 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.zSPygB/tests/target /tmp/spdk.zSPygB 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122962681856 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356517376 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6393835520 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64668225536 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847713792 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871306752 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23592960 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=349184 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:12:28.822 14:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=154624 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677777408 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678260736 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=483328 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:28.822 * Looking for test storage... 
00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:28.822 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122962681856 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8608428032 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.823 14:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:28.823 14:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:28.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.823 --rc genhtml_branch_coverage=1 00:12:28.823 --rc genhtml_function_coverage=1 00:12:28.823 --rc genhtml_legend=1 00:12:28.823 --rc geninfo_all_blocks=1 00:12:28.823 --rc geninfo_unexecuted_blocks=1 00:12:28.823 00:12:28.823 ' 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:28.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.823 --rc genhtml_branch_coverage=1 00:12:28.823 --rc genhtml_function_coverage=1 00:12:28.823 --rc genhtml_legend=1 00:12:28.823 --rc geninfo_all_blocks=1 00:12:28.823 --rc geninfo_unexecuted_blocks=1 00:12:28.823 00:12:28.823 ' 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:28.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.823 --rc genhtml_branch_coverage=1 00:12:28.823 --rc genhtml_function_coverage=1 00:12:28.823 --rc genhtml_legend=1 00:12:28.823 --rc geninfo_all_blocks=1 00:12:28.823 --rc geninfo_unexecuted_blocks=1 00:12:28.823 00:12:28.823 ' 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:28.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.823 --rc genhtml_branch_coverage=1 00:12:28.823 --rc genhtml_function_coverage=1 00:12:28.823 --rc genhtml_legend=1 00:12:28.823 --rc geninfo_all_blocks=1 00:12:28.823 --rc geninfo_unexecuted_blocks=1 00:12:28.823 00:12:28.823 ' 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.823 14:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:28.823 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.083 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.083 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.083 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.083 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.083 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.083 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.083 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.083 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.083 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.083 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:29.083 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:29.083 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:29.084 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.363 14:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:34.363 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:34.363 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.363 14:32:40 
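The trace above builds the `e810`, `x722`, and `mlx` arrays keyed by PCI vendor:device ID, then classifies each discovered NIC (here `0x8086 - 0x159b`, bound to the `ice` driver) before choosing TCP interfaces. A minimal root-free sketch of that lookup — the IDs are copied from the trace, but the `classify_nic` helper name is mine, not SPDK's:

```shell
#!/usr/bin/env bash
# Map PCI vendor:device IDs to NIC families, mirroring the e810/x722/mlx
# arrays assembled in nvmf/common.sh (IDs taken from the trace above).
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 (ice driver)
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx  ;;    # Mellanox ConnectX family
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b   # prints: e810
```

Both ports found in the trace (0000:31:00.0 and 0000:31:00.1) carry the same `0x8086:0x159b` ID, which is why both land in the `e810` array and `pci_devs` ends up with 2 entries.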
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:34.363 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:34.364 Found net devices under 0000:31:00.0: cvl_0_0 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:34.364 Found net devices under 0000:31:00.1: cvl_0_1 00:12:34.364 14:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.364 14:32:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:34.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:34.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms 00:12:34.364 00:12:34.364 --- 10.0.0.2 ping statistics --- 00:12:34.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.364 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:34.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:12:34.364 00:12:34.364 --- 10.0.0.1 ping statistics --- 00:12:34.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.364 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:34.364 14:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:34.364 ************************************ 00:12:34.364 START TEST nvmf_filesystem_no_in_capsule 00:12:34.364 ************************************ 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3779833 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3779833 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3779833 ']' 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.364 14:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.364 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:34.364 [2024-11-20 14:32:41.251097] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:12:34.364 [2024-11-20 14:32:41.251145] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.364 [2024-11-20 14:32:41.320529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.364 [2024-11-20 14:32:41.350876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.364 [2024-11-20 14:32:41.350904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:34.364 [2024-11-20 14:32:41.350910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.364 [2024-11-20 14:32:41.350915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.364 [2024-11-20 14:32:41.350919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.364 [2024-11-20 14:32:41.352446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.364 [2024-11-20 14:32:41.352602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.364 [2024-11-20 14:32:41.352720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.364 [2024-11-20 14:32:41.352722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.625 [2024-11-20 14:32:41.453446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.625 Malloc1 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.625 [2024-11-20 14:32:41.581338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:34.625 14:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.625 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:34.625 { 00:12:34.625 "name": "Malloc1", 00:12:34.625 "aliases": [ 00:12:34.625 "ec7422b5-97f0-4bda-9c3a-de0354bbfec9" 00:12:34.625 ], 00:12:34.625 "product_name": "Malloc disk", 00:12:34.625 "block_size": 512, 00:12:34.625 "num_blocks": 1048576, 00:12:34.625 "uuid": "ec7422b5-97f0-4bda-9c3a-de0354bbfec9", 00:12:34.625 "assigned_rate_limits": { 00:12:34.625 "rw_ios_per_sec": 0, 00:12:34.625 "rw_mbytes_per_sec": 0, 00:12:34.625 "r_mbytes_per_sec": 0, 00:12:34.625 "w_mbytes_per_sec": 0 00:12:34.625 }, 00:12:34.625 "claimed": true, 00:12:34.625 "claim_type": "exclusive_write", 00:12:34.625 "zoned": false, 00:12:34.625 "supported_io_types": { 00:12:34.625 "read": true, 00:12:34.625 "write": true, 00:12:34.625 "unmap": true, 00:12:34.625 "flush": true, 00:12:34.625 "reset": true, 00:12:34.625 "nvme_admin": false, 00:12:34.625 "nvme_io": false, 00:12:34.625 "nvme_io_md": false, 00:12:34.625 "write_zeroes": true, 00:12:34.625 "zcopy": true, 00:12:34.625 "get_zone_info": false, 00:12:34.625 "zone_management": false, 00:12:34.625 "zone_append": false, 00:12:34.625 "compare": false, 00:12:34.625 "compare_and_write": 
false, 00:12:34.626 "abort": true, 00:12:34.626 "seek_hole": false, 00:12:34.626 "seek_data": false, 00:12:34.626 "copy": true, 00:12:34.626 "nvme_iov_md": false 00:12:34.626 }, 00:12:34.626 "memory_domains": [ 00:12:34.626 { 00:12:34.626 "dma_device_id": "system", 00:12:34.626 "dma_device_type": 1 00:12:34.626 }, 00:12:34.626 { 00:12:34.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.626 "dma_device_type": 2 00:12:34.626 } 00:12:34.626 ], 00:12:34.626 "driver_specific": {} 00:12:34.626 } 00:12:34.626 ]' 00:12:34.626 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:34.626 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:34.626 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:34.626 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:34.626 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:34.626 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:34.626 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:34.626 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.531 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
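`get_bdev_size` extracts `block_size` and `num_blocks` from the `bdev_get_bdevs` JSON with `jq`, and the test later compares the resulting byte count against the NVMe namespace size. The arithmetic is easy to check standalone; the values below are taken directly from the Malloc1 JSON in the trace:

```shell
# Byte size of Malloc1 as derived by get_bdev_size: block_size * num_blocks.
# 512 B blocks * 1048576 blocks = 536870912 bytes (512 MiB), matching the
# malloc_size=536870912 assignment in the trace.
bs=512
nb=1048576
malloc_size=$((bs * nb))
echo "$malloc_size"   # prints: 536870912
```

This is the value `sec_size_to_bytes` must reproduce for `nvme0n1` before the `(( nvme_size == malloc_size ))` check can pass.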
# waitforserial SPDKISFASTANDAWESOME 00:12:36.531 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:36.531 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.531 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:36.531 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:38.435 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:38.435 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:38.435 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.435 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:38.435 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.435 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:38.435 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:38.435 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:38.435 14:32:45 
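`waitforserial` polls `lsblk -l -o NAME,SERIAL` for the expected serial, retrying up to 16 times (the `(( i++ <= 15 ))` guard visible in the trace) with a sleep between checks. A root-free sketch of the same retry pattern — the `probe` stub stands in for the real `lsblk | grep -c <serial>` pipeline and succeeds on its third call by construction:

```shell
# Retry loop in the style of waitforserial: poll a probe until it succeeds
# or 16 attempts are exhausted. "probe" is a stub standing in for
# "lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME".
attempts=0
probe() { attempts=$((attempts + 1)); [ "$attempts" -ge 3 ]; }

i=0
while (( i++ <= 15 )); do
    if probe; then
        echo "found after $attempts attempts"
        break
    fi
    sleep 0.1   # the real helper sleeps 2 s between checks
done
```

In the trace the device shows up on the first check after the initial 2 s sleep, so `nvme_devices=1` immediately satisfies `(( nvme_devices == nvme_device_counter ))`.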
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:38.435 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:38.435 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:38.435 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:38.435 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:38.435 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:38.435 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:38.435 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:38.435 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:38.435 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:38.694 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:40.070 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:40.070 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:40.070 14:32:46 
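The device name is peeled out of the `lsblk` output with a PCRE lookahead — `grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'` keeps only the NAME column of the row whose SERIAL matches. The same extraction works on any captured `lsblk`-style text; the sample below mimics the trace (GNU grep built with PCRE support is assumed):

```shell
# Extract the block-device name for serial SPDKISFASTANDAWESOME from
# "NAME SERIAL" output, using the same lookahead pattern as filesystem.sh.
lsblk_sample='NAME     SERIAL
sda
nvme0n1  SPDKISFASTANDAWESOME'

printf '%s\n' "$lsblk_sample" \
    | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
# prints: nvme0n1
```

The lookahead consumes nothing, so `-o` emits just the device name, which the script then stores as `nvme_name=nvme0n1` and feeds to `sec_size_to_bytes`.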
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:40.070 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.070 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:40.070 ************************************ 00:12:40.070 START TEST filesystem_ext4 00:12:40.070 ************************************ 00:12:40.070 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:40.070 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:40.070 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:40.070 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:40.070 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:40.070 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:40.070 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:40.070 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:40.070 14:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:40.070 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:40.070 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:40.070 mke2fs 1.47.0 (5-Feb-2023) 00:12:40.070 Discarding device blocks: 0/522240 done 00:12:40.070 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:40.070 Filesystem UUID: 6c663c63-4908-4658-912e-b4128c399605 00:12:40.070 Superblock backups stored on blocks: 00:12:40.070 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:40.070 00:12:40.070 Allocating group tables: 0/64 done 00:12:40.070 Writing inode tables: 0/64 done 00:12:40.070 Creating journal (8192 blocks): done 00:12:42.551 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:12:42.551 00:12:42.551 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:42.551 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:49.124 14:32:55 
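`make_filesystem` selects the force flag per filesystem type before invoking mkfs — the `'[' ext4 = ext4 ']'` / `force=-F` branch is visible in the trace. A sketch of that selection: only the ext4 branch is confirmed by this log; the btrfs/xfs flags follow the usual `mkfs.btrfs -f` / `mkfs.xfs -f` conventions and are my assumption about the unseen branches of the helper:

```shell
# Force-flag selection in the style of make_filesystem. The ext4 -> -F
# mapping is taken from the trace; btrfs/xfs -> -f is assumed from the
# standard mkfs tool conventions, not read from this log.
force_flag() {
    case "$1" in
        ext4)      printf '%s\n' -F ;;
        btrfs|xfs) printf '%s\n' -f ;;
        *)         printf '\n' ;;
    esac
}

force_flag ext4   # prints: -F
```

With the flag resolved, the helper retries `mkfs.$fstype $force $dev_name` under its `local i=0` retry counter, which is why a transient mkfs failure does not immediately fail the test.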
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3779833 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:49.124 00:12:49.124 real 0m8.713s 00:12:49.124 user 0m0.013s 00:12:49.124 sys 0m0.042s 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:49.124 ************************************ 00:12:49.124 END TEST filesystem_ext4 00:12:49.124 ************************************ 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:49.124 
14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:49.124 ************************************ 00:12:49.124 START TEST filesystem_btrfs 00:12:49.124 ************************************ 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:49.124 14:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:49.124 btrfs-progs v6.8.1 00:12:49.124 See https://btrfs.readthedocs.io for more information. 00:12:49.124 00:12:49.124 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:49.124 NOTE: several default settings have changed in version 5.15, please make sure 00:12:49.124 this does not affect your deployments: 00:12:49.124 - DUP for metadata (-m dup) 00:12:49.124 - enabled no-holes (-O no-holes) 00:12:49.124 - enabled free-space-tree (-R free-space-tree) 00:12:49.124 00:12:49.124 Label: (null) 00:12:49.124 UUID: 69d497ff-1e33-4105-ac28-a8c51a36a350 00:12:49.124 Node size: 16384 00:12:49.124 Sector size: 4096 (CPU page size: 4096) 00:12:49.124 Filesystem size: 510.00MiB 00:12:49.124 Block group profiles: 00:12:49.124 Data: single 8.00MiB 00:12:49.124 Metadata: DUP 32.00MiB 00:12:49.124 System: DUP 8.00MiB 00:12:49.124 SSD detected: yes 00:12:49.124 Zoned device: no 00:12:49.124 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:49.124 Checksum: crc32c 00:12:49.124 Number of devices: 1 00:12:49.124 Devices: 00:12:49.124 ID SIZE PATH 00:12:49.124 1 510.00MiB /dev/nvme0n1p1 00:12:49.124 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:49.124 14:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3779833 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:49.124 00:12:49.124 real 0m0.341s 00:12:49.124 user 0m0.017s 00:12:49.124 sys 0m0.034s 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.124 
14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:49.124 ************************************ 00:12:49.124 END TEST filesystem_btrfs 00:12:49.124 ************************************ 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:49.124 ************************************ 00:12:49.124 START TEST filesystem_xfs 00:12:49.124 ************************************ 00:12:49.124 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:49.125 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:49.125 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:49.125 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:49.125 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:49.125 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:49.125 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:49.125 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:49.125 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:49.125 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:49.125 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:49.383 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:49.383 = sectsz=512 attr=2, projid32bit=1 00:12:49.383 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:49.383 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:49.383 data = bsize=4096 blocks=130560, imaxpct=25 00:12:49.383 = sunit=0 swidth=0 blks 00:12:49.383 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:49.383 log =internal log bsize=4096 blocks=16384, version=2 00:12:49.383 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:49.383 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:50.321 Discarding blocks...Done. 
00:12:50.322 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:50.322 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:52.861 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:52.861 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:52.862 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:52.862 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:52.862 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:52.862 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:52.862 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3779833 00:12:52.862 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:52.862 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:52.862 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:52.862 14:32:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:52.862 00:12:52.862 real 0m3.864s 00:12:52.862 user 0m0.010s 00:12:52.862 sys 0m0.043s 00:12:52.862 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.862 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:52.862 ************************************ 00:12:52.862 END TEST filesystem_xfs 00:12:52.862 ************************************ 00:12:52.862 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:52.862 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:52.862 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3779833 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3779833 ']' 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3779833 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3779833 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3779833' 00:12:53.205 killing process with pid 3779833 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3779833 00:12:53.205 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3779833 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:53.465 00:12:53.465 real 0m19.068s 00:12:53.465 user 1m15.249s 00:12:53.465 sys 0m1.010s 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.465 ************************************ 00:12:53.465 END TEST nvmf_filesystem_no_in_capsule 00:12:53.465 ************************************ 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.465 14:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:53.465 ************************************ 00:12:53.465 START TEST nvmf_filesystem_in_capsule 00:12:53.465 ************************************ 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3784317 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3784317 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3784317 ']' 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.465 14:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.465 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.465 [2024-11-20 14:33:00.366635] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:12:53.465 [2024-11-20 14:33:00.366685] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.465 [2024-11-20 14:33:00.437570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.465 [2024-11-20 14:33:00.467487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.465 [2024-11-20 14:33:00.467514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.465 [2024-11-20 14:33:00.467519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.465 [2024-11-20 14:33:00.467524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.465 [2024-11-20 14:33:00.467529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:53.465 [2024-11-20 14:33:00.468800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.465 [2024-11-20 14:33:00.468921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.465 [2024-11-20 14:33:00.469038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.465 [2024-11-20 14:33:00.469040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.724 [2024-11-20 14:33:00.573404] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.724 Malloc1 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.724 14:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.724 [2024-11-20 14:33:00.689149] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.724 14:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.724 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:53.724 { 00:12:53.725 "name": "Malloc1", 00:12:53.725 "aliases": [ 00:12:53.725 "8e93a95f-4209-451d-9d95-9c17874bd9d9" 00:12:53.725 ], 00:12:53.725 "product_name": "Malloc disk", 00:12:53.725 "block_size": 512, 00:12:53.725 "num_blocks": 1048576, 00:12:53.725 "uuid": "8e93a95f-4209-451d-9d95-9c17874bd9d9", 00:12:53.725 "assigned_rate_limits": { 00:12:53.725 "rw_ios_per_sec": 0, 00:12:53.725 "rw_mbytes_per_sec": 0, 00:12:53.725 "r_mbytes_per_sec": 0, 00:12:53.725 "w_mbytes_per_sec": 0 00:12:53.725 }, 00:12:53.725 "claimed": true, 00:12:53.725 "claim_type": "exclusive_write", 00:12:53.725 "zoned": false, 00:12:53.725 "supported_io_types": { 00:12:53.725 "read": true, 00:12:53.725 "write": true, 00:12:53.725 "unmap": true, 00:12:53.725 "flush": true, 00:12:53.725 "reset": true, 00:12:53.725 "nvme_admin": false, 00:12:53.725 "nvme_io": false, 00:12:53.725 "nvme_io_md": false, 00:12:53.725 "write_zeroes": true, 00:12:53.725 "zcopy": true, 00:12:53.725 "get_zone_info": false, 00:12:53.725 "zone_management": false, 00:12:53.725 "zone_append": false, 00:12:53.725 "compare": false, 00:12:53.725 "compare_and_write": false, 00:12:53.725 "abort": true, 00:12:53.725 "seek_hole": false, 00:12:53.725 "seek_data": false, 00:12:53.725 "copy": true, 00:12:53.725 "nvme_iov_md": false 00:12:53.725 }, 00:12:53.725 "memory_domains": [ 00:12:53.725 { 00:12:53.725 "dma_device_id": "system", 00:12:53.725 "dma_device_type": 1 00:12:53.725 }, 00:12:53.725 { 00:12:53.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.725 "dma_device_type": 2 00:12:53.725 } 00:12:53.725 ], 00:12:53.725 
"driver_specific": {} 00:12:53.725 } 00:12:53.725 ]' 00:12:53.725 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:53.725 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:53.725 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:53.725 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:53.725 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:53.725 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:53.725 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:53.725 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.630 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.630 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:55.630 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.630 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:55.630 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:57.535 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:57.535 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:57.535 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.535 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:57.535 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.535 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:57.535 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:57.535 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:57.535 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:57.535 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:57.535 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:57.535 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:57.535 14:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:57.535 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:57.535 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:57.535 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:57.535 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:57.795 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:58.054 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:58.994 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:58.994 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:58.994 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:58.994 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.994 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.254 ************************************ 00:12:59.254 START TEST filesystem_in_capsule_ext4 00:12:59.254 ************************************ 00:12:59.254 14:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:59.254 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:59.254 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:59.254 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:59.254 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:59.254 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:59.254 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:59.254 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:59.254 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:59.254 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:59.254 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:59.254 mke2fs 1.47.0 (5-Feb-2023) 00:12:59.254 Discarding device blocks: 
0/522240 done 00:12:59.254 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:59.254 Filesystem UUID: 270456a8-c7e9-43a2-8bb3-37c8004b2d5f 00:12:59.254 Superblock backups stored on blocks: 00:12:59.254 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:59.254 00:12:59.254 Allocating group tables: 0/64 done 00:12:59.254 Writing inode tables: 0/64 done 00:13:01.794 Creating journal (8192 blocks): done 00:13:01.794 Writing superblocks and filesystem accounting information: 0/64 done 00:13:01.794 00:13:01.794 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:01.794 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3784317 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:08.368 00:13:08.368 real 0m8.648s 00:13:08.368 user 0m0.012s 00:13:08.368 sys 0m0.042s 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:08.368 ************************************ 00:13:08.368 END TEST filesystem_in_capsule_ext4 00:13:08.368 ************************************ 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.368 ************************************ 00:13:08.368 START 
TEST filesystem_in_capsule_btrfs 00:13:08.368 ************************************ 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:08.368 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:08.368 btrfs-progs v6.8.1 00:13:08.368 See https://btrfs.readthedocs.io for more information. 00:13:08.368 00:13:08.368 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:08.368 NOTE: several default settings have changed in version 5.15, please make sure 00:13:08.368 this does not affect your deployments: 00:13:08.368 - DUP for metadata (-m dup) 00:13:08.368 - enabled no-holes (-O no-holes) 00:13:08.368 - enabled free-space-tree (-R free-space-tree) 00:13:08.368 00:13:08.368 Label: (null) 00:13:08.368 UUID: 9079cf74-836d-466a-91d8-a35db41517cb 00:13:08.368 Node size: 16384 00:13:08.368 Sector size: 4096 (CPU page size: 4096) 00:13:08.368 Filesystem size: 510.00MiB 00:13:08.368 Block group profiles: 00:13:08.368 Data: single 8.00MiB 00:13:08.368 Metadata: DUP 32.00MiB 00:13:08.368 System: DUP 8.00MiB 00:13:08.369 SSD detected: yes 00:13:08.369 Zoned device: no 00:13:08.369 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:08.369 Checksum: crc32c 00:13:08.369 Number of devices: 1 00:13:08.369 Devices: 00:13:08.369 ID SIZE PATH 00:13:08.369 1 510.00MiB /dev/nvme0n1p1 00:13:08.369 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3784317 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:08.369 00:13:08.369 real 0m0.627s 00:13:08.369 user 0m0.016s 00:13:08.369 sys 0m0.038s 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:08.369 ************************************ 00:13:08.369 END TEST filesystem_in_capsule_btrfs 00:13:08.369 ************************************ 00:13:08.369 14:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.369 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.629 ************************************ 00:13:08.629 START TEST filesystem_in_capsule_xfs 00:13:08.629 ************************************ 00:13:08.629 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:08.629 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:08.629 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:08.629 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:08.629 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:08.629 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:08.629 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:08.629 
14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:13:08.629 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:08.629 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:08.629 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:08.629 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:08.629 = sectsz=512 attr=2, projid32bit=1 00:13:08.629 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:08.629 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:08.629 data = bsize=4096 blocks=130560, imaxpct=25 00:13:08.629 = sunit=0 swidth=0 blks 00:13:08.629 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:08.629 log =internal log bsize=4096 blocks=16384, version=2 00:13:08.629 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:08.629 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:09.195 Discarding blocks...Done. 
00:13:09.196 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:09.196 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:11.100 14:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:11.100 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:11.100 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:11.100 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:11.100 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:11.100 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:11.100 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3784317 00:13:11.100 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:11.100 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:11.100 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:13:11.100 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:11.100 00:13:11.100 real 0m2.609s 00:13:11.100 user 0m0.011s 00:13:11.100 sys 0m0.041s 00:13:11.100 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.100 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:11.100 ************************************ 00:13:11.100 END TEST filesystem_in_capsule_xfs 00:13:11.100 ************************************ 00:13:11.100 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:11.100 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:11.100 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.360 14:33:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3784317 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3784317 ']' 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3784317 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.360 14:33:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3784317 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3784317' 00:13:11.360 killing process with pid 3784317 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3784317 00:13:11.360 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3784317 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:11.620 00:13:11.620 real 0m18.194s 00:13:11.620 user 1m11.816s 00:13:11.620 sys 0m0.952s 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.620 ************************************ 00:13:11.620 END TEST nvmf_filesystem_in_capsule 00:13:11.620 ************************************ 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:11.620 rmmod nvme_tcp 00:13:11.620 rmmod nvme_fabrics 00:13:11.620 rmmod nvme_keyring 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.620 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.157 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:14.157 00:13:14.157 real 0m45.057s 00:13:14.157 user 2m28.631s 00:13:14.157 sys 0m6.073s 00:13:14.157 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.157 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:14.157 ************************************ 00:13:14.157 END TEST nvmf_filesystem 00:13:14.157 ************************************ 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:14.158 ************************************ 00:13:14.158 START TEST nvmf_target_discovery 00:13:14.158 ************************************ 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:14.158 * Looking for test storage... 
00:13:14.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:14.158 
14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:14.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.158 --rc genhtml_branch_coverage=1 00:13:14.158 --rc genhtml_function_coverage=1 00:13:14.158 --rc genhtml_legend=1 00:13:14.158 --rc geninfo_all_blocks=1 00:13:14.158 --rc geninfo_unexecuted_blocks=1 00:13:14.158 00:13:14.158 ' 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:14.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.158 --rc genhtml_branch_coverage=1 00:13:14.158 --rc genhtml_function_coverage=1 00:13:14.158 --rc genhtml_legend=1 00:13:14.158 --rc geninfo_all_blocks=1 00:13:14.158 --rc geninfo_unexecuted_blocks=1 00:13:14.158 00:13:14.158 ' 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:14.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.158 --rc genhtml_branch_coverage=1 00:13:14.158 --rc genhtml_function_coverage=1 00:13:14.158 --rc genhtml_legend=1 00:13:14.158 --rc geninfo_all_blocks=1 00:13:14.158 --rc geninfo_unexecuted_blocks=1 00:13:14.158 00:13:14.158 ' 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:14.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.158 --rc genhtml_branch_coverage=1 00:13:14.158 --rc genhtml_function_coverage=1 00:13:14.158 --rc genhtml_legend=1 00:13:14.158 --rc geninfo_all_blocks=1 00:13:14.158 --rc geninfo_unexecuted_blocks=1 00:13:14.158 00:13:14.158 ' 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.158 14:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.158 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:14.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:14.159 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:19.433 14:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:19.433 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:19.434 14:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:19.434 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:19.434 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:19.434 14:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:19.434 Found net devices under 0000:31:00.0: cvl_0_0 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:19.434 14:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:19.434 Found net devices under 0000:31:00.1: cvl_0_1 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:19.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:19.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:13:19.434 00:13:19.434 --- 10.0.0.2 ping statistics --- 00:13:19.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.434 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:13:19.434 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:19.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:19.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:13:19.435 00:13:19.435 --- 10.0.0.1 ping statistics --- 00:13:19.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.435 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3793406 00:13:19.435 14:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3793406 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3793406 ']' 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:19.435 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:19.435 [2024-11-20 14:33:26.476877] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:13:19.435 [2024-11-20 14:33:26.476927] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.695 [2024-11-20 14:33:26.565266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:19.695 [2024-11-20 14:33:26.601123] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:19.695 [2024-11-20 14:33:26.601157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.695 [2024-11-20 14:33:26.601164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.695 [2024-11-20 14:33:26.601171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.695 [2024-11-20 14:33:26.601177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.695 [2024-11-20 14:33:26.602898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.695 [2024-11-20 14:33:26.602917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.695 [2024-11-20 14:33:26.603049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.695 [2024-11-20 14:33:26.603050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.264 [2024-11-20 14:33:27.281965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.264 Null1 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.264 
14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.264 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.585 [2024-11-20 14:33:27.339568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.586 Null2 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.586 
14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.586 Null3 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.586 Null4 00:13:20.586 
14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:13:20.586 00:13:20.586 Discovery Log Number of Records 6, Generation counter 6 00:13:20.586 =====Discovery Log Entry 0====== 00:13:20.586 trtype: tcp 00:13:20.586 adrfam: ipv4 00:13:20.586 subtype: current discovery subsystem 00:13:20.586 treq: not required 00:13:20.586 portid: 0 00:13:20.586 trsvcid: 4420 00:13:20.586 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:20.586 traddr: 10.0.0.2 00:13:20.586 eflags: explicit discovery connections, duplicate discovery information 00:13:20.586 sectype: none 00:13:20.586 =====Discovery Log Entry 1====== 00:13:20.586 trtype: tcp 00:13:20.586 adrfam: ipv4 00:13:20.586 subtype: nvme subsystem 00:13:20.586 treq: not required 00:13:20.586 portid: 0 00:13:20.586 trsvcid: 4420 00:13:20.586 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:20.586 traddr: 10.0.0.2 00:13:20.586 eflags: none 00:13:20.586 sectype: none 00:13:20.586 =====Discovery Log Entry 2====== 00:13:20.586 
trtype: tcp 00:13:20.586 adrfam: ipv4 00:13:20.586 subtype: nvme subsystem 00:13:20.586 treq: not required 00:13:20.586 portid: 0 00:13:20.586 trsvcid: 4420 00:13:20.586 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:20.586 traddr: 10.0.0.2 00:13:20.586 eflags: none 00:13:20.586 sectype: none 00:13:20.586 =====Discovery Log Entry 3====== 00:13:20.586 trtype: tcp 00:13:20.586 adrfam: ipv4 00:13:20.586 subtype: nvme subsystem 00:13:20.586 treq: not required 00:13:20.586 portid: 0 00:13:20.586 trsvcid: 4420 00:13:20.586 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:20.586 traddr: 10.0.0.2 00:13:20.586 eflags: none 00:13:20.586 sectype: none 00:13:20.586 =====Discovery Log Entry 4====== 00:13:20.586 trtype: tcp 00:13:20.586 adrfam: ipv4 00:13:20.586 subtype: nvme subsystem 00:13:20.586 treq: not required 00:13:20.586 portid: 0 00:13:20.586 trsvcid: 4420 00:13:20.586 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:20.586 traddr: 10.0.0.2 00:13:20.586 eflags: none 00:13:20.586 sectype: none 00:13:20.586 =====Discovery Log Entry 5====== 00:13:20.586 trtype: tcp 00:13:20.586 adrfam: ipv4 00:13:20.586 subtype: discovery subsystem referral 00:13:20.586 treq: not required 00:13:20.586 portid: 0 00:13:20.586 trsvcid: 4430 00:13:20.586 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:20.586 traddr: 10.0.0.2 00:13:20.586 eflags: none 00:13:20.586 sectype: none 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:20.586 Perform nvmf subsystem discovery via RPC 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.586 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.586 [ 00:13:20.586 { 00:13:20.586 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:13:20.586 "subtype": "Discovery", 00:13:20.586 "listen_addresses": [ 00:13:20.586 { 00:13:20.586 "trtype": "TCP", 00:13:20.586 "adrfam": "IPv4", 00:13:20.586 "traddr": "10.0.0.2", 00:13:20.586 "trsvcid": "4420" 00:13:20.586 } 00:13:20.586 ], 00:13:20.586 "allow_any_host": true, 00:13:20.586 "hosts": [] 00:13:20.586 }, 00:13:20.586 { 00:13:20.586 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.587 "subtype": "NVMe", 00:13:20.587 "listen_addresses": [ 00:13:20.587 { 00:13:20.587 "trtype": "TCP", 00:13:20.587 "adrfam": "IPv4", 00:13:20.587 "traddr": "10.0.0.2", 00:13:20.587 "trsvcid": "4420" 00:13:20.587 } 00:13:20.587 ], 00:13:20.587 "allow_any_host": true, 00:13:20.587 "hosts": [], 00:13:20.587 "serial_number": "SPDK00000000000001", 00:13:20.587 "model_number": "SPDK bdev Controller", 00:13:20.587 "max_namespaces": 32, 00:13:20.587 "min_cntlid": 1, 00:13:20.587 "max_cntlid": 65519, 00:13:20.587 "namespaces": [ 00:13:20.587 { 00:13:20.587 "nsid": 1, 00:13:20.587 "bdev_name": "Null1", 00:13:20.587 "name": "Null1", 00:13:20.587 "nguid": "C6F468EA58FA4E84BB52CA77AF6ED820", 00:13:20.587 "uuid": "c6f468ea-58fa-4e84-bb52-ca77af6ed820" 00:13:20.587 } 00:13:20.587 ] 00:13:20.587 }, 00:13:20.587 { 00:13:20.587 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:20.587 "subtype": "NVMe", 00:13:20.587 "listen_addresses": [ 00:13:20.587 { 00:13:20.587 "trtype": "TCP", 00:13:20.587 "adrfam": "IPv4", 00:13:20.587 "traddr": "10.0.0.2", 00:13:20.587 "trsvcid": "4420" 00:13:20.587 } 00:13:20.587 ], 00:13:20.587 "allow_any_host": true, 00:13:20.587 "hosts": [], 00:13:20.587 "serial_number": "SPDK00000000000002", 00:13:20.587 "model_number": "SPDK bdev Controller", 00:13:20.587 "max_namespaces": 32, 00:13:20.587 "min_cntlid": 1, 00:13:20.587 "max_cntlid": 65519, 00:13:20.587 "namespaces": [ 00:13:20.587 { 00:13:20.587 "nsid": 1, 00:13:20.587 "bdev_name": "Null2", 00:13:20.587 "name": "Null2", 00:13:20.587 "nguid": "2B50311D14A14927AD25943421A4FA0F", 
00:13:20.587 "uuid": "2b50311d-14a1-4927-ad25-943421a4fa0f" 00:13:20.587 } 00:13:20.587 ] 00:13:20.587 }, 00:13:20.587 { 00:13:20.587 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:20.587 "subtype": "NVMe", 00:13:20.587 "listen_addresses": [ 00:13:20.587 { 00:13:20.587 "trtype": "TCP", 00:13:20.587 "adrfam": "IPv4", 00:13:20.587 "traddr": "10.0.0.2", 00:13:20.587 "trsvcid": "4420" 00:13:20.587 } 00:13:20.587 ], 00:13:20.587 "allow_any_host": true, 00:13:20.587 "hosts": [], 00:13:20.587 "serial_number": "SPDK00000000000003", 00:13:20.587 "model_number": "SPDK bdev Controller", 00:13:20.587 "max_namespaces": 32, 00:13:20.587 "min_cntlid": 1, 00:13:20.587 "max_cntlid": 65519, 00:13:20.587 "namespaces": [ 00:13:20.587 { 00:13:20.587 "nsid": 1, 00:13:20.587 "bdev_name": "Null3", 00:13:20.587 "name": "Null3", 00:13:20.587 "nguid": "165A88CAE6EB4A0E810717CF07A9A740", 00:13:20.587 "uuid": "165a88ca-e6eb-4a0e-8107-17cf07a9a740" 00:13:20.587 } 00:13:20.587 ] 00:13:20.587 }, 00:13:20.587 { 00:13:20.587 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:20.587 "subtype": "NVMe", 00:13:20.587 "listen_addresses": [ 00:13:20.587 { 00:13:20.587 "trtype": "TCP", 00:13:20.587 "adrfam": "IPv4", 00:13:20.587 "traddr": "10.0.0.2", 00:13:20.587 "trsvcid": "4420" 00:13:20.587 } 00:13:20.587 ], 00:13:20.587 "allow_any_host": true, 00:13:20.587 "hosts": [], 00:13:20.587 "serial_number": "SPDK00000000000004", 00:13:20.587 "model_number": "SPDK bdev Controller", 00:13:20.587 "max_namespaces": 32, 00:13:20.587 "min_cntlid": 1, 00:13:20.587 "max_cntlid": 65519, 00:13:20.587 "namespaces": [ 00:13:20.587 { 00:13:20.587 "nsid": 1, 00:13:20.587 "bdev_name": "Null4", 00:13:20.587 "name": "Null4", 00:13:20.587 "nguid": "88D16A7DFCE844F8B6EC727F96B6A50C", 00:13:20.587 "uuid": "88d16a7d-fce8-44f8-b6ec-727f96b6a50c" 00:13:20.587 } 00:13:20.587 ] 00:13:20.587 } 00:13:20.587 ] 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.587 
14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.587 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:20.912 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.912 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:20.912 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.912 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.912 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.912 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:20.912 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.912 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.912 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.912 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:20.912 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:20.912 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.912 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.912 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.912 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:20.912 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:13:20.912 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:20.913 rmmod nvme_tcp 00:13:20.913 rmmod nvme_fabrics 00:13:20.913 rmmod nvme_keyring 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3793406 ']' 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3793406 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3793406 ']' 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3793406 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3793406 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3793406' 00:13:20.913 killing process with pid 3793406 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3793406 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3793406 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.913 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.479 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:23.479 00:13:23.479 real 0m9.254s 00:13:23.479 user 0m6.756s 00:13:23.479 sys 0m4.600s 00:13:23.479 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.479 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.479 ************************************ 00:13:23.479 END TEST nvmf_target_discovery 00:13:23.479 ************************************ 00:13:23.479 14:33:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:23.479 14:33:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:23.479 14:33:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.479 14:33:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:23.479 ************************************ 00:13:23.479 START TEST nvmf_referrals 00:13:23.479 ************************************ 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:23.479 * Looking for test storage... 
00:13:23.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:23.479 14:33:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:23.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.479 
--rc genhtml_branch_coverage=1 00:13:23.479 --rc genhtml_function_coverage=1 00:13:23.479 --rc genhtml_legend=1 00:13:23.479 --rc geninfo_all_blocks=1 00:13:23.479 --rc geninfo_unexecuted_blocks=1 00:13:23.479 00:13:23.479 ' 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:23.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.479 --rc genhtml_branch_coverage=1 00:13:23.479 --rc genhtml_function_coverage=1 00:13:23.479 --rc genhtml_legend=1 00:13:23.479 --rc geninfo_all_blocks=1 00:13:23.479 --rc geninfo_unexecuted_blocks=1 00:13:23.479 00:13:23.479 ' 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:23.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.479 --rc genhtml_branch_coverage=1 00:13:23.479 --rc genhtml_function_coverage=1 00:13:23.479 --rc genhtml_legend=1 00:13:23.479 --rc geninfo_all_blocks=1 00:13:23.479 --rc geninfo_unexecuted_blocks=1 00:13:23.479 00:13:23.479 ' 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:23.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.479 --rc genhtml_branch_coverage=1 00:13:23.479 --rc genhtml_function_coverage=1 00:13:23.479 --rc genhtml_legend=1 00:13:23.479 --rc geninfo_all_blocks=1 00:13:23.479 --rc geninfo_unexecuted_blocks=1 00:13:23.479 00:13:23.479 ' 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.479 
14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.479 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.480 14:33:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:23.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:23.480 14:33:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:23.480 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:28.760 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:28.761 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:28.761 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:28.761 Found net devices under 0000:31:00.0: cvl_0_0 00:13:28.761 14:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:28.761 Found net devices under 0000:31:00.1: cvl_0_1 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:28.761 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:29.021 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.021 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.021 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:29.021 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:29.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:13:29.021 00:13:29.021 --- 10.0.0.2 ping statistics --- 00:13:29.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.021 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:13:29.021 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:29.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:13:29.021 00:13:29.021 --- 10.0.0.1 ping statistics --- 00:13:29.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.021 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:13:29.021 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.021 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:29.021 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:29.021 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.021 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:29.022 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:29.022 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.022 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:29.022 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:29.022 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:29.022 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:29.022 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:29.022 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.022 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3798112 00:13:29.022 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3798112 00:13:29.022 
14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3798112 ']' 00:13:29.022 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.022 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.022 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.022 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.022 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.022 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.022 [2024-11-20 14:33:35.974021] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:13:29.022 [2024-11-20 14:33:35.974088] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.022 [2024-11-20 14:33:36.068622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.281 [2024-11-20 14:33:36.121761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.281 [2024-11-20 14:33:36.121819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:29.281 [2024-11-20 14:33:36.121829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.281 [2024-11-20 14:33:36.121836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.281 [2024-11-20 14:33:36.121843] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.281 [2024-11-20 14:33:36.124330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.281 [2024-11-20 14:33:36.124531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.281 [2024-11-20 14:33:36.124532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.281 [2024-11-20 14:33:36.124387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.849 [2024-11-20 14:33:36.791826] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.849 [2024-11-20 14:33:36.812588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:29.849 14:33:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:29.849 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.109 14:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:30.109 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:30.370 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:30.629 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:30.629 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:30.629 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:30.629 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:30.629 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:30.629 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:30.629 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:30.629 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:30.629 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:30.629 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:30.629 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:30.629 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:30.629 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 
10.0.0.2 -s 8009 -o json 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:30.888 14:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:30.888 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:31.147 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ 
'' == '' ]] 00:13:31.147 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:31.147 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:31.147 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:31.147 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:31.147 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:31.147 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:31.147 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:31.147 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.147 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.147 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.147 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:31.147 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:31.147 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.147 14:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.147 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:31.408 14:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:31.408 rmmod nvme_tcp 00:13:31.408 rmmod nvme_fabrics 00:13:31.408 rmmod nvme_keyring 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3798112 ']' 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3798112 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3798112 ']' 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3798112 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.408 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3798112 00:13:31.668 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.668 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.668 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3798112' 00:13:31.668 killing process with pid 3798112 00:13:31.668 14:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3798112 00:13:31.668 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3798112 00:13:31.668 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:31.668 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:31.668 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:31.668 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:31.668 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:31.668 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:31.668 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:31.668 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:31.668 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:31.668 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.668 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:31.668 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:34.207 00:13:34.207 real 0m10.655s 00:13:34.207 user 0m11.955s 00:13:34.207 sys 0m4.957s 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.207 ************************************ 00:13:34.207 END TEST nvmf_referrals 00:13:34.207 ************************************ 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:34.207 ************************************ 00:13:34.207 START TEST nvmf_connect_disconnect 00:13:34.207 ************************************ 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:34.207 * Looking for test storage... 
00:13:34.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:34.207 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:34.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.208 --rc genhtml_branch_coverage=1 00:13:34.208 --rc genhtml_function_coverage=1 00:13:34.208 --rc genhtml_legend=1 00:13:34.208 --rc geninfo_all_blocks=1 00:13:34.208 --rc geninfo_unexecuted_blocks=1 00:13:34.208 00:13:34.208 ' 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:34.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.208 --rc genhtml_branch_coverage=1 00:13:34.208 --rc genhtml_function_coverage=1 00:13:34.208 --rc genhtml_legend=1 00:13:34.208 --rc geninfo_all_blocks=1 00:13:34.208 --rc geninfo_unexecuted_blocks=1 00:13:34.208 00:13:34.208 ' 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:34.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.208 --rc genhtml_branch_coverage=1 00:13:34.208 --rc genhtml_function_coverage=1 00:13:34.208 --rc genhtml_legend=1 00:13:34.208 --rc geninfo_all_blocks=1 00:13:34.208 --rc geninfo_unexecuted_blocks=1 00:13:34.208 00:13:34.208 ' 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:34.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.208 --rc genhtml_branch_coverage=1 00:13:34.208 --rc genhtml_function_coverage=1 00:13:34.208 --rc genhtml_legend=1 00:13:34.208 --rc geninfo_all_blocks=1 00:13:34.208 --rc geninfo_unexecuted_blocks=1 00:13:34.208 00:13:34.208 ' 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:34.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:34.208 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:39.483 14:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:39.483 14:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:39.483 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:39.483 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:39.483 14:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:39.483 Found net devices under 0000:31:00.0: cvl_0_0 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:39.483 14:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:39.483 Found net devices under 0000:31:00.1: cvl_0_1 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:39.483 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.483 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.483 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:39.483 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:39.483 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:39.483 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:39.483 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:39.483 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:39.483 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:39.483 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.483 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:39.483 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:39.483 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:39.484 14:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:39.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:39.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:13:39.484 00:13:39.484 --- 10.0.0.2 ping statistics --- 00:13:39.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.484 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:39.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:39.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:13:39.484 00:13:39.484 --- 10.0.0.1 ping statistics --- 00:13:39.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.484 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=3803216 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3803216 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3803216 ']' 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:39.484 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:39.484 [2024-11-20 14:33:46.312863] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:13:39.484 [2024-11-20 14:33:46.312932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.484 [2024-11-20 14:33:46.402846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:39.484 [2024-11-20 14:33:46.456433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:39.484 [2024-11-20 14:33:46.456486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.484 [2024-11-20 14:33:46.456494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.484 [2024-11-20 14:33:46.456502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.484 [2024-11-20 14:33:46.456508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.484 [2024-11-20 14:33:46.458922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.484 [2024-11-20 14:33:46.459064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.484 [2024-11-20 14:33:46.459203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.484 [2024-11-20 14:33:46.459204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:40.422 14:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:40.422 [2024-11-20 14:33:47.154876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.422 14:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:40.422 [2024-11-20 14:33:47.217233] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:40.422 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:43.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:58.678 14:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:58.678 rmmod nvme_tcp 00:13:58.678 rmmod nvme_fabrics 00:13:58.678 rmmod nvme_keyring 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3803216 ']' 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3803216 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3803216 ']' 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3803216 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3803216 
00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3803216' 00:13:58.678 killing process with pid 3803216 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3803216 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3803216 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.678 14:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.585 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:00.585 00:14:00.585 real 0m26.672s 00:14:00.585 user 1m16.498s 00:14:00.585 sys 0m5.166s 00:14:00.585 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.585 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:00.585 ************************************ 00:14:00.585 END TEST nvmf_connect_disconnect 00:14:00.585 ************************************ 00:14:00.585 14:34:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:00.585 14:34:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:00.585 14:34:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.585 14:34:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:00.585 ************************************ 00:14:00.585 START TEST nvmf_multitarget 00:14:00.585 ************************************ 00:14:00.585 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:00.585 * Looking for test storage... 
00:14:00.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.585 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:00.585 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:14:00.585 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:00.585 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:00.585 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:00.585 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:00.585 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:00.585 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:00.586 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.586 --rc genhtml_branch_coverage=1 00:14:00.586 --rc genhtml_function_coverage=1 00:14:00.586 --rc genhtml_legend=1 00:14:00.586 --rc geninfo_all_blocks=1 00:14:00.586 --rc geninfo_unexecuted_blocks=1 00:14:00.586 00:14:00.586 ' 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:00.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.586 --rc genhtml_branch_coverage=1 00:14:00.586 --rc genhtml_function_coverage=1 00:14:00.586 --rc genhtml_legend=1 00:14:00.586 --rc geninfo_all_blocks=1 00:14:00.586 --rc geninfo_unexecuted_blocks=1 00:14:00.586 00:14:00.586 ' 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:00.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.586 --rc genhtml_branch_coverage=1 00:14:00.586 --rc genhtml_function_coverage=1 00:14:00.586 --rc genhtml_legend=1 00:14:00.586 --rc geninfo_all_blocks=1 00:14:00.586 --rc geninfo_unexecuted_blocks=1 00:14:00.586 00:14:00.586 ' 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:00.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.586 --rc genhtml_branch_coverage=1 00:14:00.586 --rc genhtml_function_coverage=1 00:14:00.586 --rc genhtml_legend=1 00:14:00.586 --rc geninfo_all_blocks=1 00:14:00.586 --rc geninfo_unexecuted_blocks=1 00:14:00.586 00:14:00.586 ' 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.586 14:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:00.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.586 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.587 14:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:00.587 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:00.587 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:14:00.587 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:05.868 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:05.868 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:05.868 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:05.868 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:05.869 14:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:05.869 14:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:05.869 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:05.869 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.869 14:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:05.869 Found net devices under 0000:31:00.0: cvl_0_0 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.869 
14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:05.869 Found net devices under 0000:31:00.1: cvl_0_1 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:05.869 14:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:05.869 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:06.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:14:06.129 00:14:06.129 --- 10.0.0.2 ping statistics --- 00:14:06.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.129 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:06.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:06.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:14:06.129 00:14:06.129 --- 10.0.0.1 ping statistics --- 00:14:06.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.129 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3811710 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 3811710 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3811710 ']' 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:06.129 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:06.388 [2024-11-20 14:34:13.215494] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:14:06.388 [2024-11-20 14:34:13.215545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.388 [2024-11-20 14:34:13.301242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.388 [2024-11-20 14:34:13.337179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.388 [2024-11-20 14:34:13.337214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:06.388 [2024-11-20 14:34:13.337222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.388 [2024-11-20 14:34:13.337229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.388 [2024-11-20 14:34:13.337235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.389 [2024-11-20 14:34:13.338747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.389 [2024-11-20 14:34:13.338900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.389 [2024-11-20 14:34:13.339054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.389 [2024-11-20 14:34:13.339055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.956 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.956 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:14:06.956 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:06.956 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:06.956 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:07.215 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.215 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:07.215 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:07.215 14:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:07.215 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:07.215 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:07.215 "nvmf_tgt_1" 00:14:07.215 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:07.215 "nvmf_tgt_2" 00:14:07.215 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:07.215 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:07.475 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:07.475 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:07.475 true 00:14:07.475 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:07.475 true 00:14:07.475 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:07.475 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:07.735 rmmod nvme_tcp 00:14:07.735 rmmod nvme_fabrics 00:14:07.735 rmmod nvme_keyring 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3811710 ']' 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3811710 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3811710 ']' 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3811710 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3811710 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3811710' 00:14:07.735 killing process with pid 3811710 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3811710 00:14:07.735 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3811710 00:14:07.994 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:07.994 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:07.995 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:07.995 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:07.995 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:14:07.995 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:14:07.995 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:07.995 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:07.995 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:07.995 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.995 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.995 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.906 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:09.906 00:14:09.906 real 0m9.455s 00:14:09.906 user 0m8.004s 00:14:09.906 sys 0m4.656s 00:14:09.906 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.906 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:09.906 ************************************ 00:14:09.906 END TEST nvmf_multitarget 00:14:09.906 ************************************ 00:14:09.906 14:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:09.906 14:34:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:09.906 14:34:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.906 14:34:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:09.906 ************************************ 00:14:09.906 START TEST nvmf_rpc 00:14:09.906 ************************************ 00:14:09.906 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:10.166 * Looking for test storage... 
00:14:10.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.166 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:10.166 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:14:10.166 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:10.166 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:10.166 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.166 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.166 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.166 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.166 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.166 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.166 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.166 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.166 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.166 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.166 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.166 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:10.166 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:10.166 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.167 14:34:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:10.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.167 --rc genhtml_branch_coverage=1 00:14:10.167 --rc genhtml_function_coverage=1 00:14:10.167 --rc genhtml_legend=1 00:14:10.167 --rc geninfo_all_blocks=1 00:14:10.167 --rc geninfo_unexecuted_blocks=1 
00:14:10.167 00:14:10.167 ' 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:10.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.167 --rc genhtml_branch_coverage=1 00:14:10.167 --rc genhtml_function_coverage=1 00:14:10.167 --rc genhtml_legend=1 00:14:10.167 --rc geninfo_all_blocks=1 00:14:10.167 --rc geninfo_unexecuted_blocks=1 00:14:10.167 00:14:10.167 ' 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:10.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.167 --rc genhtml_branch_coverage=1 00:14:10.167 --rc genhtml_function_coverage=1 00:14:10.167 --rc genhtml_legend=1 00:14:10.167 --rc geninfo_all_blocks=1 00:14:10.167 --rc geninfo_unexecuted_blocks=1 00:14:10.167 00:14:10.167 ' 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:10.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.167 --rc genhtml_branch_coverage=1 00:14:10.167 --rc genhtml_function_coverage=1 00:14:10.167 --rc genhtml_legend=1 00:14:10.167 --rc geninfo_all_blocks=1 00:14:10.167 --rc geninfo_unexecuted_blocks=1 00:14:10.167 00:14:10.167 ' 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.167 14:34:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:10.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:10.167 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:10.167 14:34:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.446 
14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 
(0x8086 - 0x159b)' 00:14:15.446 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:15.446 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:15.447 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:15.447 Found net devices under 0000:31:00.0: cvl_0_0 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:15.447 Found net devices under 0000:31:00.1: cvl_0_1 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.447 14:34:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:15.447 
14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:15.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:15.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:14:15.447 00:14:15.447 --- 10.0.0.2 ping statistics --- 00:14:15.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.447 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:15.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:15.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:14:15.447 00:14:15.447 --- 10.0.0.1 ping statistics --- 00:14:15.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.447 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:15.447 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:15.448 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.448 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3816561 00:14:15.448 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3816561 00:14:15.448 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3816561 
']' 00:14:15.448 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.448 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.448 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.448 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.448 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.448 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:15.448 [2024-11-20 14:34:22.435763] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:14:15.448 [2024-11-20 14:34:22.435815] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.707 [2024-11-20 14:34:22.520530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:15.707 [2024-11-20 14:34:22.563705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.707 [2024-11-20 14:34:22.563747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.707 [2024-11-20 14:34:22.563755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.707 [2024-11-20 14:34:22.563762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:15.707 [2024-11-20 14:34:22.563768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.707 [2024-11-20 14:34:22.565864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.707 [2024-11-20 14:34:22.566023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.707 [2024-11-20 14:34:22.566163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.707 [2024-11-20 14:34:22.566165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:16.276 "tick_rate": 2400000000, 00:14:16.276 "poll_groups": [ 00:14:16.276 { 00:14:16.276 "name": "nvmf_tgt_poll_group_000", 00:14:16.276 "admin_qpairs": 0, 00:14:16.276 "io_qpairs": 0, 00:14:16.276 
"current_admin_qpairs": 0, 00:14:16.276 "current_io_qpairs": 0, 00:14:16.276 "pending_bdev_io": 0, 00:14:16.276 "completed_nvme_io": 0, 00:14:16.276 "transports": [] 00:14:16.276 }, 00:14:16.276 { 00:14:16.276 "name": "nvmf_tgt_poll_group_001", 00:14:16.276 "admin_qpairs": 0, 00:14:16.276 "io_qpairs": 0, 00:14:16.276 "current_admin_qpairs": 0, 00:14:16.276 "current_io_qpairs": 0, 00:14:16.276 "pending_bdev_io": 0, 00:14:16.276 "completed_nvme_io": 0, 00:14:16.276 "transports": [] 00:14:16.276 }, 00:14:16.276 { 00:14:16.276 "name": "nvmf_tgt_poll_group_002", 00:14:16.276 "admin_qpairs": 0, 00:14:16.276 "io_qpairs": 0, 00:14:16.276 "current_admin_qpairs": 0, 00:14:16.276 "current_io_qpairs": 0, 00:14:16.276 "pending_bdev_io": 0, 00:14:16.276 "completed_nvme_io": 0, 00:14:16.276 "transports": [] 00:14:16.276 }, 00:14:16.276 { 00:14:16.276 "name": "nvmf_tgt_poll_group_003", 00:14:16.276 "admin_qpairs": 0, 00:14:16.276 "io_qpairs": 0, 00:14:16.276 "current_admin_qpairs": 0, 00:14:16.276 "current_io_qpairs": 0, 00:14:16.276 "pending_bdev_io": 0, 00:14:16.276 "completed_nvme_io": 0, 00:14:16.276 "transports": [] 00:14:16.276 } 00:14:16.276 ] 00:14:16.276 }' 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.276 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.276 [2024-11-20 14:34:23.331279] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.536 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.536 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:16.537 "tick_rate": 2400000000, 00:14:16.537 "poll_groups": [ 00:14:16.537 { 00:14:16.537 "name": "nvmf_tgt_poll_group_000", 00:14:16.537 "admin_qpairs": 0, 00:14:16.537 "io_qpairs": 0, 00:14:16.537 "current_admin_qpairs": 0, 00:14:16.537 "current_io_qpairs": 0, 00:14:16.537 "pending_bdev_io": 0, 00:14:16.537 "completed_nvme_io": 0, 00:14:16.537 "transports": [ 00:14:16.537 { 00:14:16.537 "trtype": "TCP" 00:14:16.537 } 00:14:16.537 ] 00:14:16.537 }, 00:14:16.537 { 00:14:16.537 "name": "nvmf_tgt_poll_group_001", 00:14:16.537 "admin_qpairs": 0, 00:14:16.537 "io_qpairs": 0, 00:14:16.537 "current_admin_qpairs": 0, 00:14:16.537 "current_io_qpairs": 0, 00:14:16.537 "pending_bdev_io": 0, 00:14:16.537 "completed_nvme_io": 0, 00:14:16.537 "transports": [ 00:14:16.537 { 00:14:16.537 "trtype": "TCP" 00:14:16.537 } 00:14:16.537 ] 00:14:16.537 }, 00:14:16.537 { 00:14:16.537 "name": "nvmf_tgt_poll_group_002", 00:14:16.537 "admin_qpairs": 0, 00:14:16.537 "io_qpairs": 0, 00:14:16.537 
"current_admin_qpairs": 0, 00:14:16.537 "current_io_qpairs": 0, 00:14:16.537 "pending_bdev_io": 0, 00:14:16.537 "completed_nvme_io": 0, 00:14:16.537 "transports": [ 00:14:16.537 { 00:14:16.537 "trtype": "TCP" 00:14:16.537 } 00:14:16.537 ] 00:14:16.537 }, 00:14:16.537 { 00:14:16.537 "name": "nvmf_tgt_poll_group_003", 00:14:16.537 "admin_qpairs": 0, 00:14:16.537 "io_qpairs": 0, 00:14:16.537 "current_admin_qpairs": 0, 00:14:16.537 "current_io_qpairs": 0, 00:14:16.537 "pending_bdev_io": 0, 00:14:16.537 "completed_nvme_io": 0, 00:14:16.537 "transports": [ 00:14:16.537 { 00:14:16.537 "trtype": "TCP" 00:14:16.537 } 00:14:16.537 ] 00:14:16.537 } 00:14:16.537 ] 00:14:16.537 }' 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.537 Malloc1 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.537 [2024-11-20 14:34:23.468309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:16.537 
14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:14:16.537 [2024-11-20 14:34:23.490963] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:14:16.537 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:16.537 could not add new controller: failed to write to nvme-fabrics device 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.537 14:34:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.537 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.538 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:17.939 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:17.939 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:17.939 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:17.939 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:17.939 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:20.476 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:20.476 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:20.476 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:20.476 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:20.476 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:20.476 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:20.476 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:20.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.476 14:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:20.476 [2024-11-20 14:34:27.053974] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:14:20.476 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:20.476 could not add new controller: failed to write to nvme-fabrics device 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:20.476 14:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.476 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:21.853 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:21.853 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:21.853 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:21.853 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:21.853 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.823 [2024-11-20 14:34:30.677111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.823 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:25.224 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:25.224 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:25.224 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:25.224 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:25.224 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:27.126 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:27.126 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:27.126 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:27.126 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:27.126 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:27.126 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:27.126 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:27.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.385 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:27.385 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:27.385 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:27.385 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:27.385 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:27.385 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:27.385 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:27.385 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:27.385 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.385 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.385 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.385 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.385 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.385 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.386 14:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.386 [2024-11-20 14:34:34.269478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.386 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:28.765 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:28.765 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:28.765 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:28.765 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:28.765 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:30.668 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:30.668 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:30.668 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:30.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.928 [2024-11-20 14:34:37.883545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.928 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:32.843 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:32.843 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:32.843 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:14:32.843 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:32.843 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:34.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.744 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.745 [2024-11-20 14:34:41.521385] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.745 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:36.124 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:36.124 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:36.124 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:36.124 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:36.124 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:14:38.026 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:38.026 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:38.026 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:38.026 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:38.026 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:38.026 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:38.026 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:38.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.286 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:38.286 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:38.286 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:38.286 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:38.286 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:38.286 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:38.286 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:38.286 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:38.286 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.286 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.286 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.286 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.286 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.286 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.286 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.286 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:38.286 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:38.287 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.287 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.287 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.287 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.287 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.287 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.287 [2024-11-20 14:34:45.209519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.287 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.287 14:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:38.287 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.287 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.287 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.287 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:38.287 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.287 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.287 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.287 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:40.194 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:40.194 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:40.194 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:40.194 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:40.194 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:42.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.102 [2024-11-20 14:34:48.978308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.102 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.102 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.102 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 [2024-11-20 14:34:49.026400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.103 
14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 [2024-11-20 14:34:49.074515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:42.103 
14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 [2024-11-20 14:34:49.122650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.103 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.364 [2024-11-20 
14:34:49.170801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.364 
14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:42.364 "tick_rate": 2400000000, 00:14:42.364 "poll_groups": [ 00:14:42.364 { 00:14:42.364 "name": "nvmf_tgt_poll_group_000", 00:14:42.364 "admin_qpairs": 0, 00:14:42.364 "io_qpairs": 224, 00:14:42.364 "current_admin_qpairs": 0, 00:14:42.364 "current_io_qpairs": 0, 00:14:42.364 "pending_bdev_io": 0, 00:14:42.364 "completed_nvme_io": 388, 00:14:42.364 "transports": [ 00:14:42.364 { 00:14:42.364 "trtype": "TCP" 00:14:42.364 } 00:14:42.364 ] 00:14:42.364 }, 00:14:42.364 { 00:14:42.364 "name": "nvmf_tgt_poll_group_001", 00:14:42.364 "admin_qpairs": 1, 00:14:42.364 "io_qpairs": 223, 00:14:42.364 "current_admin_qpairs": 0, 00:14:42.364 "current_io_qpairs": 0, 00:14:42.364 "pending_bdev_io": 0, 00:14:42.364 "completed_nvme_io": 404, 00:14:42.364 "transports": [ 00:14:42.364 { 00:14:42.364 "trtype": "TCP" 00:14:42.364 } 00:14:42.364 ] 00:14:42.364 }, 00:14:42.364 { 00:14:42.364 "name": "nvmf_tgt_poll_group_002", 00:14:42.364 "admin_qpairs": 6, 00:14:42.364 "io_qpairs": 218, 00:14:42.364 "current_admin_qpairs": 0, 00:14:42.364 "current_io_qpairs": 0, 00:14:42.364 "pending_bdev_io": 0, 00:14:42.364 "completed_nvme_io": 218, 00:14:42.364 "transports": [ 00:14:42.364 { 00:14:42.364 "trtype": "TCP" 00:14:42.364 } 00:14:42.364 ] 00:14:42.364 }, 00:14:42.364 { 00:14:42.364 "name": "nvmf_tgt_poll_group_003", 00:14:42.364 "admin_qpairs": 0, 00:14:42.364 "io_qpairs": 224, 
00:14:42.364 "current_admin_qpairs": 0, 00:14:42.364 "current_io_qpairs": 0, 00:14:42.364 "pending_bdev_io": 0, 00:14:42.364 "completed_nvme_io": 229, 00:14:42.364 "transports": [ 00:14:42.364 { 00:14:42.364 "trtype": "TCP" 00:14:42.364 } 00:14:42.364 ] 00:14:42.364 } 00:14:42.364 ] 00:14:42.364 }' 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:42.364 rmmod nvme_tcp 00:14:42.364 rmmod nvme_fabrics 00:14:42.364 rmmod nvme_keyring 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3816561 ']' 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3816561 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3816561 ']' 00:14:42.364 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3816561 00:14:42.365 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:42.365 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.365 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3816561 00:14:42.365 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.365 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.365 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3816561' 00:14:42.365 killing process with pid 3816561 00:14:42.365 14:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3816561 00:14:42.365 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3816561 00:14:42.624 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:42.624 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:42.624 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:42.624 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:42.624 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:42.624 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:42.624 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:42.624 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:42.625 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:42.625 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.625 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.625 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.532 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:44.532 00:14:44.532 real 0m34.630s 00:14:44.532 user 1m48.710s 00:14:44.532 sys 0m5.514s 00:14:44.532 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.532 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.532 ************************************ 00:14:44.532 END TEST 
nvmf_rpc 00:14:44.532 ************************************ 00:14:44.532 14:34:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:44.532 14:34:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:44.532 14:34:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.532 14:34:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:44.793 ************************************ 00:14:44.793 START TEST nvmf_invalid 00:14:44.793 ************************************ 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:44.793 * Looking for test storage... 00:14:44.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:44.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.793 --rc genhtml_branch_coverage=1 00:14:44.793 --rc genhtml_function_coverage=1 00:14:44.793 --rc genhtml_legend=1 00:14:44.793 --rc geninfo_all_blocks=1 00:14:44.793 --rc geninfo_unexecuted_blocks=1 00:14:44.793 00:14:44.793 ' 
00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:44.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.793 --rc genhtml_branch_coverage=1 00:14:44.793 --rc genhtml_function_coverage=1 00:14:44.793 --rc genhtml_legend=1 00:14:44.793 --rc geninfo_all_blocks=1 00:14:44.793 --rc geninfo_unexecuted_blocks=1 00:14:44.793 00:14:44.793 ' 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:44.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.793 --rc genhtml_branch_coverage=1 00:14:44.793 --rc genhtml_function_coverage=1 00:14:44.793 --rc genhtml_legend=1 00:14:44.793 --rc geninfo_all_blocks=1 00:14:44.793 --rc geninfo_unexecuted_blocks=1 00:14:44.793 00:14:44.793 ' 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:44.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.793 --rc genhtml_branch_coverage=1 00:14:44.793 --rc genhtml_function_coverage=1 00:14:44.793 --rc genhtml_legend=1 00:14:44.793 --rc geninfo_all_blocks=1 00:14:44.793 --rc geninfo_unexecuted_blocks=1 00:14:44.793 00:14:44.793 ' 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.793 14:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.793 
14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.793 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.794 14:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:44.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:44.794 14:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:44.794 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:50.101 14:34:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.101 14:34:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:50.101 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:50.101 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:50.101 Found net devices under 0000:31:00.0: cvl_0_0 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:50.101 Found net devices under 0000:31:00.1: cvl_0_1 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:50.101 14:34:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:50.101 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:50.102 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:50.102 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:50.102 14:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:50.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:14:50.102 00:14:50.102 --- 10.0.0.2 ping statistics --- 00:14:50.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.102 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:50.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:50.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:14:50.102 00:14:50.102 --- 10.0.0.1 ping statistics --- 00:14:50.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.102 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:50.102 14:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3826924 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3826924 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3826924 ']' 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:50.102 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:50.102 [2024-11-20 14:34:57.153365] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:14:50.102 [2024-11-20 14:34:57.153419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.362 [2024-11-20 14:34:57.243354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:50.362 [2024-11-20 14:34:57.295951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.362 [2024-11-20 14:34:57.296002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.362 [2024-11-20 14:34:57.296011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.362 [2024-11-20 14:34:57.296018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.362 [2024-11-20 14:34:57.296024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:50.362 [2024-11-20 14:34:57.298294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.362 [2024-11-20 14:34:57.298373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.363 [2024-11-20 14:34:57.298531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.363 [2024-11-20 14:34:57.298532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.931 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.931 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:50.931 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:50.931 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:50.931 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:50.931 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.931 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:50.931 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode32239 00:14:51.190 [2024-11-20 14:34:58.107517] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:51.190 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:51.190 { 00:14:51.190 "nqn": "nqn.2016-06.io.spdk:cnode32239", 00:14:51.190 "tgt_name": "foobar", 00:14:51.190 "method": "nvmf_create_subsystem", 00:14:51.190 "req_id": 1 00:14:51.190 } 00:14:51.190 Got JSON-RPC error 
response 00:14:51.190 response: 00:14:51.190 { 00:14:51.190 "code": -32603, 00:14:51.190 "message": "Unable to find target foobar" 00:14:51.190 }' 00:14:51.190 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:51.190 { 00:14:51.190 "nqn": "nqn.2016-06.io.spdk:cnode32239", 00:14:51.190 "tgt_name": "foobar", 00:14:51.190 "method": "nvmf_create_subsystem", 00:14:51.190 "req_id": 1 00:14:51.190 } 00:14:51.190 Got JSON-RPC error response 00:14:51.190 response: 00:14:51.190 { 00:14:51.190 "code": -32603, 00:14:51.190 "message": "Unable to find target foobar" 00:14:51.190 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:51.190 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:51.190 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode7011 00:14:51.449 [2024-11-20 14:34:58.272057] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7011: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:51.449 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:51.449 { 00:14:51.449 "nqn": "nqn.2016-06.io.spdk:cnode7011", 00:14:51.449 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:51.449 "method": "nvmf_create_subsystem", 00:14:51.449 "req_id": 1 00:14:51.449 } 00:14:51.449 Got JSON-RPC error response 00:14:51.449 response: 00:14:51.449 { 00:14:51.449 "code": -32602, 00:14:51.449 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:51.449 }' 00:14:51.449 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:51.449 { 00:14:51.449 "nqn": "nqn.2016-06.io.spdk:cnode7011", 00:14:51.449 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:51.449 "method": "nvmf_create_subsystem", 00:14:51.449 
"req_id": 1 00:14:51.449 } 00:14:51.449 Got JSON-RPC error response 00:14:51.449 response: 00:14:51.449 { 00:14:51.449 "code": -32602, 00:14:51.449 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:51.449 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:51.449 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:51.449 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25542 00:14:51.449 [2024-11-20 14:34:58.436603] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25542: invalid model number 'SPDK_Controller' 00:14:51.449 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:51.449 { 00:14:51.449 "nqn": "nqn.2016-06.io.spdk:cnode25542", 00:14:51.449 "model_number": "SPDK_Controller\u001f", 00:14:51.449 "method": "nvmf_create_subsystem", 00:14:51.449 "req_id": 1 00:14:51.449 } 00:14:51.449 Got JSON-RPC error response 00:14:51.449 response: 00:14:51.449 { 00:14:51.449 "code": -32602, 00:14:51.449 "message": "Invalid MN SPDK_Controller\u001f" 00:14:51.449 }' 00:14:51.449 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:51.449 { 00:14:51.449 "nqn": "nqn.2016-06.io.spdk:cnode25542", 00:14:51.449 "model_number": "SPDK_Controller\u001f", 00:14:51.449 "method": "nvmf_create_subsystem", 00:14:51.449 "req_id": 1 00:14:51.449 } 00:14:51.449 Got JSON-RPC error response 00:14:51.449 response: 00:14:51.449 { 00:14:51.449 "code": -32602, 00:14:51.449 "message": "Invalid MN SPDK_Controller\u001f" 00:14:51.449 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:51.449 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:51.449 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:14:51.449 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:51.449 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:51.449 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:51.449 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:51.449 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.449 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.450 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:51.450 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:51.450 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.450 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:51.710 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.710 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:51.710 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.711 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ : == \- ]] 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ':1waH`ROD`w0s:ad$l]#8' 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ':1waH`ROD`w0s:ad$l]#8' nqn.2016-06.io.spdk:cnode14159 00:14:51.711 [2024-11-20 14:34:58.705456] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14159: invalid serial number ':1waH`ROD`w0s:ad$l]#8' 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:51.711 { 00:14:51.711 "nqn": "nqn.2016-06.io.spdk:cnode14159", 00:14:51.711 "serial_number": ":1waH`ROD`w0s:ad$l]#8", 00:14:51.711 "method": "nvmf_create_subsystem", 00:14:51.711 "req_id": 1 00:14:51.711 } 00:14:51.711 Got JSON-RPC error response 00:14:51.711 response: 00:14:51.711 { 00:14:51.711 "code": -32602, 00:14:51.711 "message": "Invalid SN :1waH`ROD`w0s:ad$l]#8" 00:14:51.711 }' 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:51.711 { 00:14:51.711 "nqn": "nqn.2016-06.io.spdk:cnode14159", 00:14:51.711 "serial_number": ":1waH`ROD`w0s:ad$l]#8", 00:14:51.711 "method": "nvmf_create_subsystem", 00:14:51.711 "req_id": 1 00:14:51.711 } 00:14:51.711 Got JSON-RPC error response 00:14:51.711 response: 00:14:51.711 { 00:14:51.711 "code": -32602, 00:14:51.711 "message": "Invalid SN :1waH`ROD`w0s:ad$l]#8" 00:14:51.711 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:51.711 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.711 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:51.711 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:51.711 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:51.972 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:51.972 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.972 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.972 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:51.972 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:51.972 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:51.972 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.972 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.972 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:51.972 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:51.972 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:51.972 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.972 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.972 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:51.973 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.973 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.973 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:51.973 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:51.973 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:51.974 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:51.974 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.974 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.974 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ \ == \- ]] 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '\A~'\''oBbA#xCl+ '\''nbmB/EatoFO#n=&+9>xz^AuH{I' 00:14:51.974 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '\A~'\''oBbA#xCl+ '\''nbmB/EatoFO#n=&+9>xz^AuH{I' nqn.2016-06.io.spdk:cnode799 00:14:52.235 [2024-11-20 14:34:59.054584] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode799: invalid model number '\A~'oBbA#xCl+ 'nbmB/EatoFO#n=&+9>xz^AuH{I' 00:14:52.235 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:52.235 { 00:14:52.235 "nqn": "nqn.2016-06.io.spdk:cnode799", 00:14:52.235 "model_number": "\\A~'\''oBbA#xCl+ '\''nbmB/EatoFO#n=&+9>xz^AuH{I", 00:14:52.235 "method": "nvmf_create_subsystem", 00:14:52.235 "req_id": 1 00:14:52.235 } 00:14:52.235 Got JSON-RPC error response 00:14:52.235 response: 00:14:52.235 { 00:14:52.235 "code": -32602, 00:14:52.235 "message": "Invalid MN \\A~'\''oBbA#xCl+ '\''nbmB/EatoFO#n=&+9>xz^AuH{I" 00:14:52.235 }' 00:14:52.235 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:52.235 { 
00:14:52.235 "nqn": "nqn.2016-06.io.spdk:cnode799", 00:14:52.235 "model_number": "\\A~'oBbA#xCl+ 'nbmB/EatoFO#n=&+9>xz^AuH{I", 00:14:52.235 "method": "nvmf_create_subsystem", 00:14:52.235 "req_id": 1 00:14:52.235 } 00:14:52.235 Got JSON-RPC error response 00:14:52.235 response: 00:14:52.235 { 00:14:52.235 "code": -32602, 00:14:52.235 "message": "Invalid MN \\A~'oBbA#xCl+ 'nbmB/EatoFO#n=&+9>xz^AuH{I" 00:14:52.235 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:52.235 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:52.235 [2024-11-20 14:34:59.215192] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.235 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:52.495 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:52.495 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:52.495 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:52.495 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:52.495 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:52.495 [2024-11-20 14:34:59.544207] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:52.755 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:52.755 { 00:14:52.755 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:52.755 "listen_address": { 00:14:52.755 "trtype": "tcp", 00:14:52.755 "traddr": "", 00:14:52.755 
"trsvcid": "4421" 00:14:52.755 }, 00:14:52.755 "method": "nvmf_subsystem_remove_listener", 00:14:52.755 "req_id": 1 00:14:52.755 } 00:14:52.755 Got JSON-RPC error response 00:14:52.755 response: 00:14:52.755 { 00:14:52.755 "code": -32602, 00:14:52.755 "message": "Invalid parameters" 00:14:52.755 }' 00:14:52.755 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:52.755 { 00:14:52.755 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:52.755 "listen_address": { 00:14:52.755 "trtype": "tcp", 00:14:52.755 "traddr": "", 00:14:52.755 "trsvcid": "4421" 00:14:52.755 }, 00:14:52.755 "method": "nvmf_subsystem_remove_listener", 00:14:52.755 "req_id": 1 00:14:52.755 } 00:14:52.755 Got JSON-RPC error response 00:14:52.755 response: 00:14:52.755 { 00:14:52.755 "code": -32602, 00:14:52.755 "message": "Invalid parameters" 00:14:52.755 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:52.755 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23350 -i 0 00:14:52.755 [2024-11-20 14:34:59.704675] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23350: invalid cntlid range [0-65519] 00:14:52.755 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:52.755 { 00:14:52.755 "nqn": "nqn.2016-06.io.spdk:cnode23350", 00:14:52.755 "min_cntlid": 0, 00:14:52.755 "method": "nvmf_create_subsystem", 00:14:52.755 "req_id": 1 00:14:52.755 } 00:14:52.755 Got JSON-RPC error response 00:14:52.755 response: 00:14:52.755 { 00:14:52.755 "code": -32602, 00:14:52.755 "message": "Invalid cntlid range [0-65519]" 00:14:52.755 }' 00:14:52.755 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:52.755 { 00:14:52.755 "nqn": "nqn.2016-06.io.spdk:cnode23350", 00:14:52.755 "min_cntlid": 0, 00:14:52.755 
"method": "nvmf_create_subsystem", 00:14:52.755 "req_id": 1 00:14:52.755 } 00:14:52.755 Got JSON-RPC error response 00:14:52.755 response: 00:14:52.755 { 00:14:52.755 "code": -32602, 00:14:52.755 "message": "Invalid cntlid range [0-65519]" 00:14:52.755 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:52.755 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17164 -i 65520 00:14:53.015 [2024-11-20 14:34:59.869214] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17164: invalid cntlid range [65520-65519] 00:14:53.015 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:53.015 { 00:14:53.015 "nqn": "nqn.2016-06.io.spdk:cnode17164", 00:14:53.015 "min_cntlid": 65520, 00:14:53.015 "method": "nvmf_create_subsystem", 00:14:53.015 "req_id": 1 00:14:53.015 } 00:14:53.015 Got JSON-RPC error response 00:14:53.015 response: 00:14:53.015 { 00:14:53.015 "code": -32602, 00:14:53.015 "message": "Invalid cntlid range [65520-65519]" 00:14:53.015 }' 00:14:53.015 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:53.015 { 00:14:53.015 "nqn": "nqn.2016-06.io.spdk:cnode17164", 00:14:53.015 "min_cntlid": 65520, 00:14:53.015 "method": "nvmf_create_subsystem", 00:14:53.015 "req_id": 1 00:14:53.015 } 00:14:53.015 Got JSON-RPC error response 00:14:53.015 response: 00:14:53.015 { 00:14:53.015 "code": -32602, 00:14:53.015 "message": "Invalid cntlid range [65520-65519]" 00:14:53.015 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:53.015 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode63 -I 0 00:14:53.015 [2024-11-20 14:35:00.029705] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: 
*ERROR*: Subsystem nqn.2016-06.io.spdk:cnode63: invalid cntlid range [1-0] 00:14:53.015 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:53.015 { 00:14:53.015 "nqn": "nqn.2016-06.io.spdk:cnode63", 00:14:53.015 "max_cntlid": 0, 00:14:53.015 "method": "nvmf_create_subsystem", 00:14:53.015 "req_id": 1 00:14:53.015 } 00:14:53.015 Got JSON-RPC error response 00:14:53.015 response: 00:14:53.015 { 00:14:53.015 "code": -32602, 00:14:53.015 "message": "Invalid cntlid range [1-0]" 00:14:53.015 }' 00:14:53.015 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:53.015 { 00:14:53.015 "nqn": "nqn.2016-06.io.spdk:cnode63", 00:14:53.015 "max_cntlid": 0, 00:14:53.015 "method": "nvmf_create_subsystem", 00:14:53.015 "req_id": 1 00:14:53.015 } 00:14:53.015 Got JSON-RPC error response 00:14:53.015 response: 00:14:53.015 { 00:14:53.015 "code": -32602, 00:14:53.015 "message": "Invalid cntlid range [1-0]" 00:14:53.015 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:53.015 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7489 -I 65520 00:14:53.275 [2024-11-20 14:35:00.194222] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7489: invalid cntlid range [1-65520] 00:14:53.275 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:53.275 { 00:14:53.275 "nqn": "nqn.2016-06.io.spdk:cnode7489", 00:14:53.275 "max_cntlid": 65520, 00:14:53.275 "method": "nvmf_create_subsystem", 00:14:53.275 "req_id": 1 00:14:53.275 } 00:14:53.275 Got JSON-RPC error response 00:14:53.275 response: 00:14:53.275 { 00:14:53.275 "code": -32602, 00:14:53.275 "message": "Invalid cntlid range [1-65520]" 00:14:53.275 }' 00:14:53.275 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 
-- # [[ request: 00:14:53.275 { 00:14:53.275 "nqn": "nqn.2016-06.io.spdk:cnode7489", 00:14:53.275 "max_cntlid": 65520, 00:14:53.275 "method": "nvmf_create_subsystem", 00:14:53.275 "req_id": 1 00:14:53.275 } 00:14:53.275 Got JSON-RPC error response 00:14:53.275 response: 00:14:53.275 { 00:14:53.275 "code": -32602, 00:14:53.275 "message": "Invalid cntlid range [1-65520]" 00:14:53.275 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:53.275 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23411 -i 6 -I 5 00:14:53.534 [2024-11-20 14:35:00.354721] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23411: invalid cntlid range [6-5] 00:14:53.534 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:53.534 { 00:14:53.534 "nqn": "nqn.2016-06.io.spdk:cnode23411", 00:14:53.534 "min_cntlid": 6, 00:14:53.534 "max_cntlid": 5, 00:14:53.534 "method": "nvmf_create_subsystem", 00:14:53.534 "req_id": 1 00:14:53.534 } 00:14:53.534 Got JSON-RPC error response 00:14:53.534 response: 00:14:53.534 { 00:14:53.534 "code": -32602, 00:14:53.534 "message": "Invalid cntlid range [6-5]" 00:14:53.534 }' 00:14:53.534 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:53.534 { 00:14:53.534 "nqn": "nqn.2016-06.io.spdk:cnode23411", 00:14:53.534 "min_cntlid": 6, 00:14:53.534 "max_cntlid": 5, 00:14:53.534 "method": "nvmf_create_subsystem", 00:14:53.534 "req_id": 1 00:14:53.534 } 00:14:53.534 Got JSON-RPC error response 00:14:53.534 response: 00:14:53.534 { 00:14:53.534 "code": -32602, 00:14:53.534 "message": "Invalid cntlid range [6-5]" 00:14:53.534 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:53.535 { 00:14:53.535 "name": "foobar", 00:14:53.535 "method": "nvmf_delete_target", 00:14:53.535 "req_id": 1 00:14:53.535 } 00:14:53.535 Got JSON-RPC error response 00:14:53.535 response: 00:14:53.535 { 00:14:53.535 "code": -32602, 00:14:53.535 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:53.535 }' 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:53.535 { 00:14:53.535 "name": "foobar", 00:14:53.535 "method": "nvmf_delete_target", 00:14:53.535 "req_id": 1 00:14:53.535 } 00:14:53.535 Got JSON-RPC error response 00:14:53.535 response: 00:14:53.535 { 00:14:53.535 "code": -32602, 00:14:53.535 "message": "The specified target doesn't exist, cannot delete it." 00:14:53.535 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:53.535 rmmod nvme_tcp 00:14:53.535 
rmmod nvme_fabrics 00:14:53.535 rmmod nvme_keyring 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3826924 ']' 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3826924 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3826924 ']' 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3826924 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3826924 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3826924' 00:14:53.535 killing process with pid 3826924 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3826924 00:14:53.535 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3826924 00:14:53.794 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:53.794 14:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:53.794 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:53.794 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:53.794 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:53.794 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:14:53.794 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:53.794 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:53.794 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:53.794 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.794 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.794 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.700 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:55.700 00:14:55.700 real 0m11.105s 00:14:55.700 user 0m16.854s 00:14:55.700 sys 0m4.862s 00:14:55.700 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.700 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:55.700 ************************************ 00:14:55.700 END TEST nvmf_invalid 00:14:55.700 ************************************ 00:14:55.700 14:35:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:14:55.700 14:35:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:55.700 14:35:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.700 14:35:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:55.960 ************************************ 00:14:55.960 START TEST nvmf_connect_stress 00:14:55.960 ************************************ 00:14:55.960 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:55.960 * Looking for test storage... 00:14:55.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.960 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:55.960 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:14:55.960 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:55.960 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:55.960 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:55.960 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:55.960 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:55.960 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:55.960 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:55.960 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:14:55.960 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:55.960 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:55.960 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:55.960 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:55.960 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:55.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.961 --rc genhtml_branch_coverage=1 00:14:55.961 --rc genhtml_function_coverage=1 00:14:55.961 --rc genhtml_legend=1 00:14:55.961 --rc 
geninfo_all_blocks=1 00:14:55.961 --rc geninfo_unexecuted_blocks=1 00:14:55.961 00:14:55.961 ' 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:55.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.961 --rc genhtml_branch_coverage=1 00:14:55.961 --rc genhtml_function_coverage=1 00:14:55.961 --rc genhtml_legend=1 00:14:55.961 --rc geninfo_all_blocks=1 00:14:55.961 --rc geninfo_unexecuted_blocks=1 00:14:55.961 00:14:55.961 ' 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:55.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.961 --rc genhtml_branch_coverage=1 00:14:55.961 --rc genhtml_function_coverage=1 00:14:55.961 --rc genhtml_legend=1 00:14:55.961 --rc geninfo_all_blocks=1 00:14:55.961 --rc geninfo_unexecuted_blocks=1 00:14:55.961 00:14:55.961 ' 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:55.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.961 --rc genhtml_branch_coverage=1 00:14:55.961 --rc genhtml_function_coverage=1 00:14:55.961 --rc genhtml_legend=1 00:14:55.961 --rc geninfo_all_blocks=1 00:14:55.961 --rc geninfo_unexecuted_blocks=1 00:14:55.961 00:14:55.961 ' 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:55.961 
14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:55.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:14:55.961 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:55.962 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:15:01.234 14:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:01.234 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:01.234 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.234 14:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:01.234 Found net devices under 0000:31:00.0: cvl_0_0 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.234 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:01.235 Found net devices under 0000:31:00.1: cvl_0_1 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:01.235 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:01.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:01.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:15:01.235 00:15:01.235 --- 10.0.0.2 ping statistics --- 00:15:01.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.235 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:01.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:01.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:15:01.235 00:15:01.235 --- 10.0.0.1 ping statistics --- 00:15:01.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.235 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3832194 00:15:01.235 14:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3832194 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3832194 ']' 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.235 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.495 [2024-11-20 14:35:08.311337] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:15:01.495 [2024-11-20 14:35:08.311404] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.495 [2024-11-20 14:35:08.403603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:01.495 [2024-11-20 14:35:08.455054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:01.495 [2024-11-20 14:35:08.455104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.495 [2024-11-20 14:35:08.455114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.495 [2024-11-20 14:35:08.455123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.495 [2024-11-20 14:35:08.455130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:01.495 [2024-11-20 14:35:08.457046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.495 [2024-11-20 14:35:08.457207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.495 [2024-11-20 14:35:08.457209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.062 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.062 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:15:02.062 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:02.062 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:02.062 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.062 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.062 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:02.062 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.062 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:15:02.322 [2024-11-20 14:35:09.129202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.322 [2024-11-20 14:35:09.146138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.322 NULL1 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3832463 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.322 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.582 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.582 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:02.582 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.582 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.582 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.841 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.841 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:02.841 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.841 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.841 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.409 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.409 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:03.409 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.409 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.409 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.667 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.667 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:03.667 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.667 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.667 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.927 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.927 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:03.927 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.927 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.927 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.186 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.186 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:04.186 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.186 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.186 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.445 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.445 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:04.445 14:35:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.445 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.445 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.012 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.012 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:05.012 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.012 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.012 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.271 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.271 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:05.271 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.271 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.271 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.530 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.530 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:05.530 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.530 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.530 
14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.790 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.790 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:05.790 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.790 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.790 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.050 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.050 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:06.050 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.050 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.050 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.621 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.621 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:06.621 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.621 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.621 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.880 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.880 
14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:06.880 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.880 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.880 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.140 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.140 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:07.140 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.140 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.140 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.400 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.400 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:07.400 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.400 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.400 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.660 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.660 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:07.660 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:15:07.660 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.660 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.230 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.230 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:08.230 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.230 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.230 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.490 14:35:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.490 14:35:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:08.490 14:35:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.490 14:35:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.490 14:35:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.748 14:35:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.748 14:35:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:08.748 14:35:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.748 14:35:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.748 14:35:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:15:09.007 14:35:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.007 14:35:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:09.007 14:35:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.007 14:35:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.007 14:35:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:09.266 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.266 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:09.266 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.266 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.266 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:09.834 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.834 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:09.834 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.834 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.834 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:10.093 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.093 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3832463 00:15:10.093 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.093 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.093 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:10.351 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.351 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:10.351 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.351 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.351 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:10.609 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.609 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:10.609 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.609 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.609 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:10.869 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.869 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:10.869 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.869 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:10.869 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.437 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.437 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:11.437 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.437 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.437 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.698 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.698 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:11.698 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.698 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.698 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.957 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.957 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:11.957 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.957 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.957 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.216 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:15:12.216 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:12.216 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.216 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.216 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.475 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:12.475 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.475 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3832463 00:15:12.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3832463) - No such process 00:15:12.475 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3832463 00:15:12.475 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:12.475 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:12.475 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:12.475 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:12.475 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:12.475 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:12.475 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:12.475 14:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:12.475 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:12.475 rmmod nvme_tcp 00:15:12.475 rmmod nvme_fabrics 00:15:12.475 rmmod nvme_keyring 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3832194 ']' 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3832194 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3832194 ']' 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3832194 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3832194 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3832194' 00:15:12.734 killing process with pid 3832194 00:15:12.734 14:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3832194 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3832194 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.734 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.728 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:14.728 00:15:14.728 real 0m18.982s 00:15:14.728 user 0m41.721s 00:15:14.728 sys 0m7.377s 00:15:14.728 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:15:14.728 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.728 ************************************ 00:15:14.728 END TEST nvmf_connect_stress 00:15:14.728 ************************************ 00:15:14.728 14:35:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:14.728 14:35:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:14.728 14:35:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.728 14:35:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:14.989 ************************************ 00:15:14.989 START TEST nvmf_fused_ordering 00:15:14.989 ************************************ 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:14.989 * Looking for test storage... 
00:15:14.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:14.989 14:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:14.989 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.990 14:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:14.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.990 --rc genhtml_branch_coverage=1 00:15:14.990 --rc genhtml_function_coverage=1 00:15:14.990 --rc genhtml_legend=1 00:15:14.990 --rc geninfo_all_blocks=1 00:15:14.990 --rc geninfo_unexecuted_blocks=1 00:15:14.990 00:15:14.990 ' 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:14.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.990 --rc genhtml_branch_coverage=1 00:15:14.990 --rc genhtml_function_coverage=1 00:15:14.990 --rc genhtml_legend=1 00:15:14.990 --rc geninfo_all_blocks=1 00:15:14.990 --rc geninfo_unexecuted_blocks=1 00:15:14.990 00:15:14.990 ' 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:14.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.990 --rc genhtml_branch_coverage=1 00:15:14.990 --rc genhtml_function_coverage=1 00:15:14.990 --rc genhtml_legend=1 00:15:14.990 --rc geninfo_all_blocks=1 00:15:14.990 --rc geninfo_unexecuted_blocks=1 00:15:14.990 00:15:14.990 ' 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:14.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.990 --rc genhtml_branch_coverage=1 00:15:14.990 --rc genhtml_function_coverage=1 00:15:14.990 --rc genhtml_legend=1 00:15:14.990 --rc geninfo_all_blocks=1 00:15:14.990 --rc geninfo_unexecuted_blocks=1 00:15:14.990 00:15:14.990 ' 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:14.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.990 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:14.991 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:14.991 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:15:14.991 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:21.570 14:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:21.570 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:21.570 14:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:21.570 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:21.570 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.571 14:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:21.571 Found net devices under 0000:31:00.0: cvl_0_0 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:21.571 Found net devices under 0000:31:00.1: cvl_0_1 
00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:21.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:21.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:15:21.571 00:15:21.571 --- 10.0.0.2 ping statistics --- 00:15:21.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.571 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:21.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:21.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:15:21.571 00:15:21.571 --- 10.0.0.1 ping statistics --- 00:15:21.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.571 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:21.571 14:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3839160 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3839160 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3839160 ']' 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.571 [2024-11-20 14:35:27.663611] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:15:21.571 [2024-11-20 14:35:27.663680] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.571 [2024-11-20 14:35:27.740826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.571 [2024-11-20 14:35:27.777062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.571 [2024-11-20 14:35:27.777095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.571 [2024-11-20 14:35:27.777101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.571 [2024-11-20 14:35:27.777106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.571 [2024-11-20 14:35:27.777110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:21.571 [2024-11-20 14:35:27.777679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.571 [2024-11-20 14:35:27.883756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.571 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.572 [2024-11-20 14:35:27.899942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.572 NULL1 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.572 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:21.572 [2024-11-20 14:35:27.942972] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:15:21.572 [2024-11-20 14:35:27.943000] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3839179 ] 00:15:21.572 Attached to nqn.2016-06.io.spdk:cnode1 00:15:21.572 Namespace ID: 1 size: 1GB 00:15:21.572 fused_ordering(0) 00:15:21.572 fused_ordering(1) 00:15:21.572 fused_ordering(2) 00:15:21.572 fused_ordering(3) 00:15:21.572 fused_ordering(4) 00:15:21.572 fused_ordering(5) 00:15:21.572 fused_ordering(6) 00:15:21.572 fused_ordering(7) 00:15:21.572 fused_ordering(8) 00:15:21.572 fused_ordering(9) 00:15:21.572 fused_ordering(10) 00:15:21.572 fused_ordering(11) 00:15:21.572 fused_ordering(12) 00:15:21.572 fused_ordering(13) 00:15:21.572 fused_ordering(14) 00:15:21.572 fused_ordering(15) 00:15:21.572 fused_ordering(16) 00:15:21.572 fused_ordering(17) 00:15:21.572 fused_ordering(18) 00:15:21.572 fused_ordering(19) 00:15:21.572 fused_ordering(20) 00:15:21.572 fused_ordering(21) 00:15:21.572 fused_ordering(22) 00:15:21.572 fused_ordering(23) 00:15:21.572 fused_ordering(24) 00:15:21.572 fused_ordering(25) 00:15:21.572 fused_ordering(26) 00:15:21.572 fused_ordering(27) 00:15:21.572 
fused_ordering(28) 00:15:21.572 fused_ordering(29) 00:15:21.572 fused_ordering(30) 00:15:21.572 fused_ordering(31) 00:15:21.572 fused_ordering(32) 00:15:21.572 fused_ordering(33) 00:15:21.572 fused_ordering(34) 00:15:21.572 fused_ordering(35) 00:15:21.572 fused_ordering(36) 00:15:21.572 fused_ordering(37) 00:15:21.572 fused_ordering(38) 00:15:21.572 fused_ordering(39) 00:15:21.572 fused_ordering(40) 00:15:21.572 fused_ordering(41) 00:15:21.572 fused_ordering(42) 00:15:21.572 fused_ordering(43) 00:15:21.572 fused_ordering(44) 00:15:21.572 fused_ordering(45) 00:15:21.572 fused_ordering(46) 00:15:21.572 fused_ordering(47) 00:15:21.572 fused_ordering(48) 00:15:21.572 fused_ordering(49) 00:15:21.572 fused_ordering(50) 00:15:21.572 fused_ordering(51) 00:15:21.572 fused_ordering(52) 00:15:21.572 fused_ordering(53) 00:15:21.572 fused_ordering(54) 00:15:21.572 fused_ordering(55) 00:15:21.572 fused_ordering(56) 00:15:21.572 fused_ordering(57) 00:15:21.572 fused_ordering(58) 00:15:21.572 fused_ordering(59) 00:15:21.572 fused_ordering(60) 00:15:21.572 fused_ordering(61) 00:15:21.572 fused_ordering(62) 00:15:21.572 fused_ordering(63) 00:15:21.572 fused_ordering(64) 00:15:21.572 fused_ordering(65) 00:15:21.572 fused_ordering(66) 00:15:21.572 fused_ordering(67) 00:15:21.572 fused_ordering(68) 00:15:21.572 fused_ordering(69) 00:15:21.572 fused_ordering(70) 00:15:21.572 fused_ordering(71) 00:15:21.572 fused_ordering(72) 00:15:21.572 fused_ordering(73) 00:15:21.572 fused_ordering(74) 00:15:21.572 fused_ordering(75) 00:15:21.572 fused_ordering(76) 00:15:21.572 fused_ordering(77) 00:15:21.572 fused_ordering(78) 00:15:21.572 fused_ordering(79) 00:15:21.572 fused_ordering(80) 00:15:21.572 fused_ordering(81) 00:15:21.572 fused_ordering(82) 00:15:21.572 fused_ordering(83) 00:15:21.572 fused_ordering(84) 00:15:21.572 fused_ordering(85) 00:15:21.572 fused_ordering(86) 00:15:21.572 fused_ordering(87) 00:15:21.572 fused_ordering(88) 00:15:21.572 fused_ordering(89) 00:15:21.572 
fused_ordering(90) 00:15:21.572 fused_ordering(91) 00:15:21.572 fused_ordering(92) 00:15:21.572 fused_ordering(93) 00:15:21.572 fused_ordering(94) 00:15:21.572 fused_ordering(95) 00:15:21.572 fused_ordering(96) 00:15:21.572 fused_ordering(97) 00:15:21.572 fused_ordering(98) 00:15:21.572 fused_ordering(99) 00:15:21.572 fused_ordering(100) 00:15:21.572 fused_ordering(101) 00:15:21.572 fused_ordering(102) 00:15:21.572 fused_ordering(103) 00:15:21.572 fused_ordering(104) 00:15:21.572 fused_ordering(105) 00:15:21.572 fused_ordering(106) 00:15:21.572 fused_ordering(107) 00:15:21.572 fused_ordering(108) 00:15:21.572 fused_ordering(109) 00:15:21.572 fused_ordering(110) 00:15:21.572 fused_ordering(111) 00:15:21.572 fused_ordering(112) 00:15:21.572 fused_ordering(113) 00:15:21.572 fused_ordering(114) 00:15:21.572 fused_ordering(115) 00:15:21.572 fused_ordering(116) 00:15:21.572 fused_ordering(117) 00:15:21.572 fused_ordering(118) 00:15:21.572 fused_ordering(119) 00:15:21.572 fused_ordering(120) 00:15:21.572 fused_ordering(121) 00:15:21.572 fused_ordering(122) 00:15:21.572 fused_ordering(123) 00:15:21.572 fused_ordering(124) 00:15:21.572 fused_ordering(125) 00:15:21.572 fused_ordering(126) 00:15:21.572 fused_ordering(127) 00:15:21.572 fused_ordering(128) 00:15:21.572 fused_ordering(129) 00:15:21.572 fused_ordering(130) 00:15:21.572 fused_ordering(131) 00:15:21.572 fused_ordering(132) 00:15:21.572 fused_ordering(133) 00:15:21.572 fused_ordering(134) 00:15:21.572 fused_ordering(135) 00:15:21.572 fused_ordering(136) 00:15:21.572 fused_ordering(137) 00:15:21.572 fused_ordering(138) 00:15:21.572 fused_ordering(139) 00:15:21.572 fused_ordering(140) 00:15:21.572 fused_ordering(141) 00:15:21.572 fused_ordering(142) 00:15:21.572 fused_ordering(143) 00:15:21.572 fused_ordering(144) 00:15:21.572 fused_ordering(145) 00:15:21.572 fused_ordering(146) 00:15:21.572 fused_ordering(147) 00:15:21.572 fused_ordering(148) 00:15:21.572 fused_ordering(149) 00:15:21.572 fused_ordering(150) 
00:15:21.572 fused_ordering(151) 00:15:21.572 fused_ordering(152) 00:15:21.572 fused_ordering(153) 00:15:21.572 fused_ordering(154) 00:15:21.572 fused_ordering(155) 00:15:21.572 fused_ordering(156) 00:15:21.572 fused_ordering(157) 00:15:21.572 fused_ordering(158) 00:15:21.572 fused_ordering(159) 00:15:21.572 fused_ordering(160) 00:15:21.572 fused_ordering(161) 00:15:21.572 fused_ordering(162) 00:15:21.572 fused_ordering(163) 00:15:21.572 fused_ordering(164) 00:15:21.572 fused_ordering(165) 00:15:21.572 fused_ordering(166) 00:15:21.572 fused_ordering(167) 00:15:21.572 fused_ordering(168) 00:15:21.572 fused_ordering(169) 00:15:21.572 fused_ordering(170) 00:15:21.572 fused_ordering(171) 00:15:21.572 fused_ordering(172) 00:15:21.572 fused_ordering(173) 00:15:21.572 fused_ordering(174) 00:15:21.572 fused_ordering(175) 00:15:21.572 fused_ordering(176) 00:15:21.572 fused_ordering(177) 00:15:21.572 fused_ordering(178) 00:15:21.572 fused_ordering(179) 00:15:21.572 fused_ordering(180) 00:15:21.572 fused_ordering(181) 00:15:21.572 fused_ordering(182) 00:15:21.572 fused_ordering(183) 00:15:21.572 fused_ordering(184) 00:15:21.572 fused_ordering(185) 00:15:21.572 fused_ordering(186) 00:15:21.572 fused_ordering(187) 00:15:21.572 fused_ordering(188) 00:15:21.572 fused_ordering(189) 00:15:21.572 fused_ordering(190) 00:15:21.572 fused_ordering(191) 00:15:21.572 fused_ordering(192) 00:15:21.572 fused_ordering(193) 00:15:21.572 fused_ordering(194) 00:15:21.572 fused_ordering(195) 00:15:21.572 fused_ordering(196) 00:15:21.572 fused_ordering(197) 00:15:21.572 fused_ordering(198) 00:15:21.572 fused_ordering(199) 00:15:21.572 fused_ordering(200) 00:15:21.572 fused_ordering(201) 00:15:21.572 fused_ordering(202) 00:15:21.572 fused_ordering(203) 00:15:21.572 fused_ordering(204) 00:15:21.572 fused_ordering(205) 00:15:21.833 fused_ordering(206) 00:15:21.833 fused_ordering(207) 00:15:21.833 fused_ordering(208) 00:15:21.833 fused_ordering(209) 00:15:21.833 fused_ordering(210) 00:15:21.833 
fused_ordering(211) 00:15:21.833 fused_ordering(212) 00:15:21.833 fused_ordering(213) 00:15:21.833 fused_ordering(214) 00:15:21.833 fused_ordering(215) 00:15:21.833 fused_ordering(216) 00:15:21.833 fused_ordering(217) 00:15:21.833 fused_ordering(218) 00:15:21.833 fused_ordering(219) 00:15:21.833 fused_ordering(220) 00:15:21.833 fused_ordering(221) 00:15:21.833 fused_ordering(222) 00:15:21.833 fused_ordering(223) 00:15:21.833 fused_ordering(224) 00:15:21.833 fused_ordering(225) 00:15:21.833 fused_ordering(226) 00:15:21.833 fused_ordering(227) 00:15:21.833 fused_ordering(228) 00:15:21.833 fused_ordering(229) 00:15:21.833 fused_ordering(230) 00:15:21.833 fused_ordering(231) 00:15:21.833 fused_ordering(232) 00:15:21.833 fused_ordering(233) 00:15:21.833 fused_ordering(234) 00:15:21.833 fused_ordering(235) 00:15:21.833 fused_ordering(236) 00:15:21.833 fused_ordering(237) 00:15:21.833 fused_ordering(238) 00:15:21.833 fused_ordering(239) 00:15:21.833 fused_ordering(240) 00:15:21.833 fused_ordering(241) 00:15:21.833 fused_ordering(242) 00:15:21.833 fused_ordering(243) 00:15:21.833 fused_ordering(244) 00:15:21.833 fused_ordering(245) 00:15:21.833 fused_ordering(246) 00:15:21.833 fused_ordering(247) 00:15:21.833 fused_ordering(248) 00:15:21.833 fused_ordering(249) 00:15:21.833 fused_ordering(250) 00:15:21.833 fused_ordering(251) 00:15:21.833 fused_ordering(252) 00:15:21.833 fused_ordering(253) 00:15:21.833 fused_ordering(254) 00:15:21.833 fused_ordering(255) 00:15:21.833 fused_ordering(256) 00:15:21.833 fused_ordering(257) 00:15:21.833 fused_ordering(258) 00:15:21.833 fused_ordering(259) 00:15:21.833 fused_ordering(260) 00:15:21.833 fused_ordering(261) 00:15:21.833 fused_ordering(262) 00:15:21.833 fused_ordering(263) 00:15:21.833 fused_ordering(264) 00:15:21.833 fused_ordering(265) 00:15:21.833 fused_ordering(266) 00:15:21.833 fused_ordering(267) 00:15:21.833 fused_ordering(268) 00:15:21.833 fused_ordering(269) 00:15:21.833 fused_ordering(270) 00:15:21.833 fused_ordering(271) 
00:15:21.833 fused_ordering(272) 00:15:21.833 fused_ordering(273) 00:15:21.833 fused_ordering(274) 00:15:21.833 fused_ordering(275) 00:15:21.833 fused_ordering(276) 00:15:21.833 fused_ordering(277) 00:15:21.833 fused_ordering(278) 00:15:21.833 fused_ordering(279) 00:15:21.833 fused_ordering(280) 00:15:21.833 fused_ordering(281) 00:15:21.833 fused_ordering(282) 00:15:21.833 fused_ordering(283) 00:15:21.833 fused_ordering(284) 00:15:21.833 fused_ordering(285) 00:15:21.833 fused_ordering(286) 00:15:21.833 fused_ordering(287) 00:15:21.833 fused_ordering(288) 00:15:21.833 fused_ordering(289) 00:15:21.833 fused_ordering(290) 00:15:21.833 fused_ordering(291) 00:15:21.833 fused_ordering(292) 00:15:21.833 fused_ordering(293) 00:15:21.833 fused_ordering(294) 00:15:21.833 fused_ordering(295) 00:15:21.833 fused_ordering(296) 00:15:21.833 fused_ordering(297) 00:15:21.833 fused_ordering(298) 00:15:21.833 fused_ordering(299) 00:15:21.833 fused_ordering(300) 00:15:21.833 fused_ordering(301) 00:15:21.833 fused_ordering(302) 00:15:21.833 fused_ordering(303) 00:15:21.833 fused_ordering(304) 00:15:21.833 fused_ordering(305) 00:15:21.833 fused_ordering(306) 00:15:21.833 fused_ordering(307) 00:15:21.833 fused_ordering(308) 00:15:21.833 fused_ordering(309) 00:15:21.833 fused_ordering(310) 00:15:21.833 fused_ordering(311) 00:15:21.833 fused_ordering(312) 00:15:21.833 fused_ordering(313) 00:15:21.833 fused_ordering(314) 00:15:21.833 fused_ordering(315) 00:15:21.833 fused_ordering(316) 00:15:21.833 fused_ordering(317) 00:15:21.833 fused_ordering(318) 00:15:21.833 fused_ordering(319) 00:15:21.833 fused_ordering(320) 00:15:21.833 fused_ordering(321) 00:15:21.833 fused_ordering(322) 00:15:21.833 fused_ordering(323) 00:15:21.833 fused_ordering(324) 00:15:21.833 fused_ordering(325) 00:15:21.833 fused_ordering(326) 00:15:21.833 fused_ordering(327) 00:15:21.833 fused_ordering(328) 00:15:21.833 fused_ordering(329) 00:15:21.833 fused_ordering(330) 00:15:21.833 fused_ordering(331) 00:15:21.833 
fused_ordering(332) 00:15:21.833 fused_ordering(333) 00:15:21.833 fused_ordering(334) 00:15:21.833 fused_ordering(335) 00:15:21.833 fused_ordering(336) 00:15:21.833 fused_ordering(337) 00:15:21.833 fused_ordering(338) 00:15:21.833 fused_ordering(339) 00:15:21.833 fused_ordering(340) 00:15:21.833 fused_ordering(341) 00:15:21.833 fused_ordering(342) 00:15:21.833 fused_ordering(343) 00:15:21.833 fused_ordering(344) 00:15:21.833 fused_ordering(345) 00:15:21.833 fused_ordering(346) 00:15:21.833 fused_ordering(347) 00:15:21.833 fused_ordering(348) 00:15:21.833 fused_ordering(349) 00:15:21.833 fused_ordering(350) 00:15:21.833 fused_ordering(351) 00:15:21.833 fused_ordering(352) 00:15:21.833 fused_ordering(353) 00:15:21.833 fused_ordering(354) 00:15:21.833 fused_ordering(355) 00:15:21.833 fused_ordering(356) 00:15:21.833 fused_ordering(357) 00:15:21.833 fused_ordering(358) 00:15:21.833 fused_ordering(359) 00:15:21.833 fused_ordering(360) 00:15:21.833 fused_ordering(361) 00:15:21.833 fused_ordering(362) 00:15:21.833 fused_ordering(363) 00:15:21.833 fused_ordering(364) 00:15:21.833 fused_ordering(365) 00:15:21.833 fused_ordering(366) 00:15:21.833 fused_ordering(367) 00:15:21.833 fused_ordering(368) 00:15:21.833 fused_ordering(369) 00:15:21.833 fused_ordering(370) 00:15:21.833 fused_ordering(371) 00:15:21.833 fused_ordering(372) 00:15:21.833 fused_ordering(373) 00:15:21.833 fused_ordering(374) 00:15:21.833 fused_ordering(375) 00:15:21.833 fused_ordering(376) 00:15:21.833 fused_ordering(377) 00:15:21.833 fused_ordering(378) 00:15:21.833 fused_ordering(379) 00:15:21.833 fused_ordering(380) 00:15:21.833 fused_ordering(381) 00:15:21.833 fused_ordering(382) 00:15:21.833 fused_ordering(383) 00:15:21.833 fused_ordering(384) 00:15:21.833 fused_ordering(385) 00:15:21.833 fused_ordering(386) 00:15:21.833 fused_ordering(387) 00:15:21.833 fused_ordering(388) 00:15:21.833 fused_ordering(389) 00:15:21.833 fused_ordering(390) 00:15:21.833 fused_ordering(391) 00:15:21.833 fused_ordering(392) 
00:15:21.833 fused_ordering(393) 00:15:21.833 fused_ordering(394) 00:15:21.833 fused_ordering(395) 00:15:21.833 fused_ordering(396) 00:15:21.833 fused_ordering(397) 00:15:21.833 fused_ordering(398) 00:15:21.833 fused_ordering(399) 00:15:21.833 fused_ordering(400) 00:15:21.833 fused_ordering(401) 00:15:21.833 fused_ordering(402) 00:15:21.833 fused_ordering(403) 00:15:21.833 fused_ordering(404) 00:15:21.833 fused_ordering(405) 00:15:21.833 fused_ordering(406) 00:15:21.833 fused_ordering(407) 00:15:21.833 fused_ordering(408) 00:15:21.833 fused_ordering(409) 00:15:21.833 fused_ordering(410) 00:15:22.093 fused_ordering(411) 00:15:22.093 fused_ordering(412) 00:15:22.093 fused_ordering(413) 00:15:22.093 fused_ordering(414) 00:15:22.093 fused_ordering(415) 00:15:22.093 fused_ordering(416) 00:15:22.094 fused_ordering(417) 00:15:22.094 fused_ordering(418) 00:15:22.094 fused_ordering(419) 00:15:22.094 fused_ordering(420) 00:15:22.094 fused_ordering(421) 00:15:22.094 fused_ordering(422) 00:15:22.094 fused_ordering(423) 00:15:22.094 fused_ordering(424) 00:15:22.094 fused_ordering(425) 00:15:22.094 fused_ordering(426) 00:15:22.094 fused_ordering(427) 00:15:22.094 fused_ordering(428) 00:15:22.094 fused_ordering(429) 00:15:22.094 fused_ordering(430) 00:15:22.094 fused_ordering(431) 00:15:22.094 fused_ordering(432) 00:15:22.094 fused_ordering(433) 00:15:22.094 fused_ordering(434) 00:15:22.094 fused_ordering(435) 00:15:22.094 fused_ordering(436) 00:15:22.094 fused_ordering(437) 00:15:22.094 fused_ordering(438) 00:15:22.094 fused_ordering(439) 00:15:22.094 fused_ordering(440) 00:15:22.094 fused_ordering(441) 00:15:22.094 fused_ordering(442) 00:15:22.094 fused_ordering(443) 00:15:22.094 fused_ordering(444) 00:15:22.094 fused_ordering(445) 00:15:22.094 fused_ordering(446) 00:15:22.094 fused_ordering(447) 00:15:22.094 fused_ordering(448) 00:15:22.094 fused_ordering(449) 00:15:22.094 fused_ordering(450) 00:15:22.094 fused_ordering(451) 00:15:22.094 fused_ordering(452) 00:15:22.094 
fused_ordering(453) 00:15:22.094 fused_ordering(454) 00:15:22.094 fused_ordering(455) 00:15:22.094 fused_ordering(456) 00:15:22.094 fused_ordering(457) 00:15:22.094 fused_ordering(458) 00:15:22.094 fused_ordering(459) 00:15:22.094 fused_ordering(460) 00:15:22.094 fused_ordering(461) 00:15:22.094 fused_ordering(462) 00:15:22.094 fused_ordering(463) 00:15:22.094 fused_ordering(464) 00:15:22.094 fused_ordering(465) 00:15:22.094 fused_ordering(466) 00:15:22.094 fused_ordering(467) 00:15:22.094 fused_ordering(468) 00:15:22.094 fused_ordering(469) 00:15:22.094 fused_ordering(470) 00:15:22.094 fused_ordering(471) 00:15:22.094 fused_ordering(472) 00:15:22.094 fused_ordering(473) 00:15:22.094 fused_ordering(474) 00:15:22.094 fused_ordering(475) 00:15:22.094 fused_ordering(476) 00:15:22.094 fused_ordering(477) 00:15:22.094 fused_ordering(478) 00:15:22.094 fused_ordering(479) 00:15:22.094 fused_ordering(480) 00:15:22.094 fused_ordering(481) 00:15:22.094 fused_ordering(482) 00:15:22.094 fused_ordering(483) 00:15:22.094 fused_ordering(484) 00:15:22.094 fused_ordering(485) 00:15:22.094 fused_ordering(486) 00:15:22.094 fused_ordering(487) 00:15:22.094 fused_ordering(488) 00:15:22.094 fused_ordering(489) 00:15:22.094 fused_ordering(490) 00:15:22.094 fused_ordering(491) 00:15:22.094 fused_ordering(492) 00:15:22.094 fused_ordering(493) 00:15:22.094 fused_ordering(494) 00:15:22.094 fused_ordering(495) 00:15:22.094 fused_ordering(496) 00:15:22.094 fused_ordering(497) 00:15:22.094 fused_ordering(498) 00:15:22.094 fused_ordering(499) 00:15:22.094 fused_ordering(500) 00:15:22.094 fused_ordering(501) 00:15:22.094 fused_ordering(502) 00:15:22.094 fused_ordering(503) 00:15:22.094 fused_ordering(504) 00:15:22.094 fused_ordering(505) 00:15:22.094 fused_ordering(506) 00:15:22.094 fused_ordering(507) 00:15:22.094 fused_ordering(508) 00:15:22.094 fused_ordering(509) 00:15:22.094 fused_ordering(510) 00:15:22.094 fused_ordering(511) 00:15:22.094 fused_ordering(512) 00:15:22.094 fused_ordering(513) 
00:15:22.094 fused_ordering(514) 00:15:22.094 fused_ordering(515) 00:15:22.094 fused_ordering(516) 00:15:22.094 fused_ordering(517) 00:15:22.094 fused_ordering(518) 00:15:22.094 fused_ordering(519) 00:15:22.094 fused_ordering(520) 00:15:22.094 fused_ordering(521) 00:15:22.094 fused_ordering(522) 00:15:22.094 fused_ordering(523) 00:15:22.094 fused_ordering(524) 00:15:22.094 fused_ordering(525) 00:15:22.094 fused_ordering(526) 00:15:22.094 fused_ordering(527) 00:15:22.094 fused_ordering(528) 00:15:22.094 fused_ordering(529) 00:15:22.094 fused_ordering(530) 00:15:22.094 fused_ordering(531) 00:15:22.094 fused_ordering(532) 00:15:22.094 fused_ordering(533) 00:15:22.094 fused_ordering(534) 00:15:22.094 fused_ordering(535) 00:15:22.094 fused_ordering(536) 00:15:22.094 fused_ordering(537) 00:15:22.094 fused_ordering(538) 00:15:22.094 fused_ordering(539) 00:15:22.094 fused_ordering(540) 00:15:22.094 fused_ordering(541) 00:15:22.094 fused_ordering(542) 00:15:22.094 fused_ordering(543) 00:15:22.094 fused_ordering(544) 00:15:22.094 fused_ordering(545) 00:15:22.094 fused_ordering(546) 00:15:22.094 fused_ordering(547) 00:15:22.094 fused_ordering(548) 00:15:22.094 fused_ordering(549) 00:15:22.094 fused_ordering(550) 00:15:22.094 fused_ordering(551) 00:15:22.094 fused_ordering(552) 00:15:22.094 fused_ordering(553) 00:15:22.094 fused_ordering(554) 00:15:22.094 fused_ordering(555) 00:15:22.094 fused_ordering(556) 00:15:22.094 fused_ordering(557) 00:15:22.094 fused_ordering(558) 00:15:22.094 fused_ordering(559) 00:15:22.094 fused_ordering(560) 00:15:22.094 fused_ordering(561) 00:15:22.094 fused_ordering(562) 00:15:22.094 fused_ordering(563) 00:15:22.094 fused_ordering(564) 00:15:22.094 fused_ordering(565) 00:15:22.094 fused_ordering(566) 00:15:22.094 fused_ordering(567) 00:15:22.094 fused_ordering(568) 00:15:22.094 fused_ordering(569) 00:15:22.094 fused_ordering(570) 00:15:22.094 fused_ordering(571) 00:15:22.094 fused_ordering(572) 00:15:22.094 fused_ordering(573) 00:15:22.094 
fused_ordering(574) 00:15:22.094 fused_ordering(575) 00:15:22.094 fused_ordering(576) 00:15:22.094 fused_ordering(577) 00:15:22.094 fused_ordering(578) 00:15:22.094 fused_ordering(579) 00:15:22.094 fused_ordering(580) 00:15:22.094 fused_ordering(581) 00:15:22.094 fused_ordering(582) 00:15:22.094 fused_ordering(583) 00:15:22.094 fused_ordering(584) 00:15:22.094 fused_ordering(585) 00:15:22.094 fused_ordering(586) 00:15:22.094 fused_ordering(587) 00:15:22.094 fused_ordering(588) 00:15:22.094 fused_ordering(589) 00:15:22.094 fused_ordering(590) 00:15:22.094 fused_ordering(591) 00:15:22.094 fused_ordering(592) 00:15:22.094 fused_ordering(593) 00:15:22.094 fused_ordering(594) 00:15:22.094 fused_ordering(595) 00:15:22.094 fused_ordering(596) 00:15:22.094 fused_ordering(597) 00:15:22.094 fused_ordering(598) 00:15:22.094 fused_ordering(599) 00:15:22.094 fused_ordering(600) 00:15:22.094 fused_ordering(601) 00:15:22.094 fused_ordering(602) 00:15:22.094 fused_ordering(603) 00:15:22.094 fused_ordering(604) 00:15:22.094 fused_ordering(605) 00:15:22.094 fused_ordering(606) 00:15:22.094 fused_ordering(607) 00:15:22.094 fused_ordering(608) 00:15:22.094 fused_ordering(609) 00:15:22.094 fused_ordering(610) 00:15:22.094 fused_ordering(611) 00:15:22.094 fused_ordering(612) 00:15:22.094 fused_ordering(613) 00:15:22.094 fused_ordering(614) 00:15:22.094 fused_ordering(615) 00:15:22.661 fused_ordering(616) 00:15:22.661 fused_ordering(617) 00:15:22.661 fused_ordering(618) 00:15:22.661 fused_ordering(619) 00:15:22.661 fused_ordering(620) 00:15:22.661 fused_ordering(621) 00:15:22.661 fused_ordering(622) 00:15:22.661 fused_ordering(623) 00:15:22.661 fused_ordering(624) 00:15:22.661 fused_ordering(625) 00:15:22.661 fused_ordering(626) 00:15:22.661 fused_ordering(627) 00:15:22.661 fused_ordering(628) 00:15:22.661 fused_ordering(629) 00:15:22.661 fused_ordering(630) 00:15:22.661 fused_ordering(631) 00:15:22.661 fused_ordering(632) 00:15:22.661 fused_ordering(633) 00:15:22.661 fused_ordering(634) 
00:15:22.661 fused_ordering(635) 00:15:22.661 fused_ordering(636) 00:15:22.661 fused_ordering(637) 00:15:22.661 fused_ordering(638) 00:15:22.661 fused_ordering(639) 00:15:22.661 fused_ordering(640) 00:15:22.661 fused_ordering(641) 00:15:22.661 fused_ordering(642) 00:15:22.661 fused_ordering(643) 00:15:22.661 fused_ordering(644) 00:15:22.661 fused_ordering(645) 00:15:22.661 fused_ordering(646) 00:15:22.661 fused_ordering(647) 00:15:22.661 fused_ordering(648) 00:15:22.661 fused_ordering(649) 00:15:22.661 fused_ordering(650) 00:15:22.661 fused_ordering(651) 00:15:22.661 fused_ordering(652) 00:15:22.661 fused_ordering(653) 00:15:22.661 fused_ordering(654) 00:15:22.661 fused_ordering(655) 00:15:22.661 fused_ordering(656) 00:15:22.661 fused_ordering(657) 00:15:22.661 fused_ordering(658) 00:15:22.661 fused_ordering(659) 00:15:22.661 fused_ordering(660) 00:15:22.661 fused_ordering(661) 00:15:22.661 fused_ordering(662) 00:15:22.661 fused_ordering(663) 00:15:22.661 fused_ordering(664) 00:15:22.661 fused_ordering(665) 00:15:22.661 fused_ordering(666) 00:15:22.661 fused_ordering(667) 00:15:22.661 fused_ordering(668) 00:15:22.661 fused_ordering(669) 00:15:22.661 fused_ordering(670) 00:15:22.661 fused_ordering(671) 00:15:22.661 fused_ordering(672) 00:15:22.661 fused_ordering(673) 00:15:22.661 fused_ordering(674) 00:15:22.661 fused_ordering(675) 00:15:22.661 fused_ordering(676) 00:15:22.661 fused_ordering(677) 00:15:22.661 fused_ordering(678) 00:15:22.661 fused_ordering(679) 00:15:22.661 fused_ordering(680) 00:15:22.661 fused_ordering(681) 00:15:22.661 fused_ordering(682) 00:15:22.661 fused_ordering(683) 00:15:22.661 fused_ordering(684) 00:15:22.661 fused_ordering(685) 00:15:22.661 fused_ordering(686) 00:15:22.661 fused_ordering(687) 00:15:22.661 fused_ordering(688) 00:15:22.661 fused_ordering(689) 00:15:22.661 fused_ordering(690) 00:15:22.661 fused_ordering(691) 00:15:22.661 fused_ordering(692) 00:15:22.661 fused_ordering(693) 00:15:22.661 fused_ordering(694) 00:15:22.661 
fused_ordering(695) 00:15:22.661 fused_ordering(696) 00:15:22.661 fused_ordering(697) 00:15:22.661 fused_ordering(698) 00:15:22.661 fused_ordering(699) 00:15:22.661 fused_ordering(700) 00:15:22.661 fused_ordering(701) 00:15:22.661 fused_ordering(702) 00:15:22.661 fused_ordering(703) 00:15:22.661 fused_ordering(704) 00:15:22.661 fused_ordering(705) 00:15:22.661 fused_ordering(706) 00:15:22.661 fused_ordering(707) 00:15:22.661 fused_ordering(708) 00:15:22.661 fused_ordering(709) 00:15:22.661 fused_ordering(710) 00:15:22.661 fused_ordering(711) 00:15:22.661 fused_ordering(712) 00:15:22.661 fused_ordering(713) 00:15:22.661 fused_ordering(714) 00:15:22.661 fused_ordering(715) 00:15:22.661 fused_ordering(716) 00:15:22.661 fused_ordering(717) 00:15:22.661 fused_ordering(718) 00:15:22.661 fused_ordering(719) 00:15:22.661 fused_ordering(720) 00:15:22.661 fused_ordering(721) 00:15:22.661 fused_ordering(722) 00:15:22.661 fused_ordering(723) 00:15:22.661 fused_ordering(724) 00:15:22.661 fused_ordering(725) 00:15:22.661 fused_ordering(726) 00:15:22.661 fused_ordering(727) 00:15:22.661 fused_ordering(728) 00:15:22.661 fused_ordering(729) 00:15:22.661 fused_ordering(730) 00:15:22.661 fused_ordering(731) 00:15:22.661 fused_ordering(732) 00:15:22.661 fused_ordering(733) 00:15:22.661 fused_ordering(734) 00:15:22.661 fused_ordering(735) 00:15:22.661 fused_ordering(736) 00:15:22.661 fused_ordering(737) 00:15:22.661 fused_ordering(738) 00:15:22.661 fused_ordering(739) 00:15:22.661 fused_ordering(740) 00:15:22.661 fused_ordering(741) 00:15:22.661 fused_ordering(742) 00:15:22.661 fused_ordering(743) 00:15:22.661 fused_ordering(744) 00:15:22.661 fused_ordering(745) 00:15:22.661 fused_ordering(746) 00:15:22.661 fused_ordering(747) 00:15:22.661 fused_ordering(748) 00:15:22.661 fused_ordering(749) 00:15:22.661 fused_ordering(750) 00:15:22.661 fused_ordering(751) 00:15:22.661 fused_ordering(752) 00:15:22.661 fused_ordering(753) 00:15:22.661 fused_ordering(754) 00:15:22.661 fused_ordering(755) 
00:15:22.661 fused_ordering(756) 00:15:22.661 fused_ordering(757) 00:15:22.661 fused_ordering(758) 00:15:22.661 fused_ordering(759) 00:15:22.661 fused_ordering(760) 00:15:22.661 fused_ordering(761) 00:15:22.661 fused_ordering(762) 00:15:22.661 fused_ordering(763) 00:15:22.661 fused_ordering(764) 00:15:22.661 fused_ordering(765) 00:15:22.661 fused_ordering(766) 00:15:22.661 fused_ordering(767) 00:15:22.661 fused_ordering(768) 00:15:22.661 fused_ordering(769) 00:15:22.661 fused_ordering(770) 00:15:22.661 fused_ordering(771) 00:15:22.661 fused_ordering(772) 00:15:22.661 fused_ordering(773) 00:15:22.661 fused_ordering(774) 00:15:22.661 fused_ordering(775) 00:15:22.661 fused_ordering(776) 00:15:22.661 fused_ordering(777) 00:15:22.661 fused_ordering(778) 00:15:22.661 fused_ordering(779) 00:15:22.661 fused_ordering(780) 00:15:22.661 fused_ordering(781) 00:15:22.661 fused_ordering(782) 00:15:22.661 fused_ordering(783) 00:15:22.661 fused_ordering(784) 00:15:22.661 fused_ordering(785) 00:15:22.661 fused_ordering(786) 00:15:22.661 fused_ordering(787) 00:15:22.661 fused_ordering(788) 00:15:22.661 fused_ordering(789) 00:15:22.661 fused_ordering(790) 00:15:22.661 fused_ordering(791) 00:15:22.661 fused_ordering(792) 00:15:22.661 fused_ordering(793) 00:15:22.661 fused_ordering(794) 00:15:22.662 fused_ordering(795) 00:15:22.662 fused_ordering(796) 00:15:22.662 fused_ordering(797) 00:15:22.662 fused_ordering(798) 00:15:22.662 fused_ordering(799) 00:15:22.662 fused_ordering(800) 00:15:22.662 fused_ordering(801) 00:15:22.662 fused_ordering(802) 00:15:22.662 fused_ordering(803) 00:15:22.662 fused_ordering(804) 00:15:22.662 fused_ordering(805) 00:15:22.662 fused_ordering(806) 00:15:22.662 fused_ordering(807) 00:15:22.662 fused_ordering(808) 00:15:22.662 fused_ordering(809) 00:15:22.662 fused_ordering(810) 00:15:22.662 fused_ordering(811) 00:15:22.662 fused_ordering(812) 00:15:22.662 fused_ordering(813) 00:15:22.662 fused_ordering(814) 00:15:22.662 fused_ordering(815) 00:15:22.662 
fused_ordering(816) 00:15:22.662 fused_ordering(817) 00:15:22.662 fused_ordering(818) 00:15:22.662 fused_ordering(819) 00:15:22.662 fused_ordering(820) 00:15:23.232 fused_ordering(821) 00:15:23.232 fused_ordering(822) 00:15:23.232 fused_ordering(823) 00:15:23.232 fused_ordering(824) 00:15:23.232 fused_ordering(825) 00:15:23.232 fused_ordering(826) 00:15:23.232 fused_ordering(827) 00:15:23.232 fused_ordering(828) 00:15:23.232 fused_ordering(829) 00:15:23.232 fused_ordering(830) 00:15:23.232 fused_ordering(831) 00:15:23.232 fused_ordering(832) 00:15:23.232 fused_ordering(833) 00:15:23.232 fused_ordering(834) 00:15:23.232 fused_ordering(835) 00:15:23.232 fused_ordering(836) 00:15:23.232 fused_ordering(837) 00:15:23.232 fused_ordering(838) 00:15:23.232 fused_ordering(839) 00:15:23.232 fused_ordering(840) 00:15:23.232 fused_ordering(841) 00:15:23.232 fused_ordering(842) 00:15:23.232 fused_ordering(843) 00:15:23.232 fused_ordering(844) 00:15:23.232 fused_ordering(845) 00:15:23.232 fused_ordering(846) 00:15:23.232 fused_ordering(847) 00:15:23.232 fused_ordering(848) 00:15:23.232 fused_ordering(849) 00:15:23.232 fused_ordering(850) 00:15:23.232 fused_ordering(851) 00:15:23.232 fused_ordering(852) 00:15:23.232 fused_ordering(853) 00:15:23.232 fused_ordering(854) 00:15:23.232 fused_ordering(855) 00:15:23.232 fused_ordering(856) 00:15:23.232 fused_ordering(857) 00:15:23.232 fused_ordering(858) 00:15:23.232 fused_ordering(859) 00:15:23.232 fused_ordering(860) 00:15:23.232 fused_ordering(861) 00:15:23.232 fused_ordering(862) 00:15:23.232 fused_ordering(863) 00:15:23.232 fused_ordering(864) 00:15:23.232 fused_ordering(865) 00:15:23.232 fused_ordering(866) 00:15:23.232 fused_ordering(867) 00:15:23.232 fused_ordering(868) 00:15:23.232 fused_ordering(869) 00:15:23.232 fused_ordering(870) 00:15:23.232 fused_ordering(871) 00:15:23.232 fused_ordering(872) 00:15:23.232 fused_ordering(873) 00:15:23.232 fused_ordering(874) 00:15:23.232 fused_ordering(875) 00:15:23.232 fused_ordering(876) 
00:15:23.232 fused_ordering(877) 00:15:23.232 fused_ordering(878) 00:15:23.232 fused_ordering(879) 00:15:23.232 fused_ordering(880) 00:15:23.232 fused_ordering(881) 00:15:23.232 fused_ordering(882) 00:15:23.232 fused_ordering(883) 00:15:23.232 fused_ordering(884) 00:15:23.232 fused_ordering(885) 00:15:23.232 fused_ordering(886) 00:15:23.232 fused_ordering(887) 00:15:23.232 fused_ordering(888) 00:15:23.232 fused_ordering(889) 00:15:23.232 fused_ordering(890) 00:15:23.232 fused_ordering(891) 00:15:23.232 fused_ordering(892) 00:15:23.232 fused_ordering(893) 00:15:23.232 fused_ordering(894) 00:15:23.232 fused_ordering(895) 00:15:23.232 fused_ordering(896) 00:15:23.232 fused_ordering(897) 00:15:23.232 fused_ordering(898) 00:15:23.232 fused_ordering(899) 00:15:23.232 fused_ordering(900) 00:15:23.232 fused_ordering(901) 00:15:23.232 fused_ordering(902) 00:15:23.232 fused_ordering(903) 00:15:23.232 fused_ordering(904) 00:15:23.232 fused_ordering(905) 00:15:23.232 fused_ordering(906) 00:15:23.232 fused_ordering(907) 00:15:23.232 fused_ordering(908) 00:15:23.232 fused_ordering(909) 00:15:23.232 fused_ordering(910) 00:15:23.232 fused_ordering(911) 00:15:23.232 fused_ordering(912) 00:15:23.232 fused_ordering(913) 00:15:23.232 fused_ordering(914) 00:15:23.232 fused_ordering(915) 00:15:23.232 fused_ordering(916) 00:15:23.232 fused_ordering(917) 00:15:23.232 fused_ordering(918) 00:15:23.232 fused_ordering(919) 00:15:23.232 fused_ordering(920) 00:15:23.232 fused_ordering(921) 00:15:23.232 fused_ordering(922) 00:15:23.232 fused_ordering(923) 00:15:23.232 fused_ordering(924) 00:15:23.232 fused_ordering(925) 00:15:23.232 fused_ordering(926) 00:15:23.232 fused_ordering(927) 00:15:23.232 fused_ordering(928) 00:15:23.232 fused_ordering(929) 00:15:23.232 fused_ordering(930) 00:15:23.232 fused_ordering(931) 00:15:23.232 fused_ordering(932) 00:15:23.232 fused_ordering(933) 00:15:23.232 fused_ordering(934) 00:15:23.232 fused_ordering(935) 00:15:23.232 fused_ordering(936) 00:15:23.232 
fused_ordering(937) 00:15:23.232 fused_ordering(938) 00:15:23.233 fused_ordering(939) 00:15:23.233 fused_ordering(940) 00:15:23.233 fused_ordering(941) 00:15:23.233 fused_ordering(942) 00:15:23.233 fused_ordering(943) 00:15:23.233 fused_ordering(944) 00:15:23.233 fused_ordering(945) 00:15:23.233 fused_ordering(946) 00:15:23.233 fused_ordering(947) 00:15:23.233 fused_ordering(948) 00:15:23.233 fused_ordering(949) 00:15:23.233 fused_ordering(950) 00:15:23.233 fused_ordering(951) 00:15:23.233 fused_ordering(952) 00:15:23.233 fused_ordering(953) 00:15:23.233 fused_ordering(954) 00:15:23.233 fused_ordering(955) 00:15:23.233 fused_ordering(956) 00:15:23.233 fused_ordering(957) 00:15:23.233 fused_ordering(958) 00:15:23.233 fused_ordering(959) 00:15:23.233 fused_ordering(960) 00:15:23.233 fused_ordering(961) 00:15:23.233 fused_ordering(962) 00:15:23.233 fused_ordering(963) 00:15:23.233 fused_ordering(964) 00:15:23.233 fused_ordering(965) 00:15:23.233 fused_ordering(966) 00:15:23.233 fused_ordering(967) 00:15:23.233 fused_ordering(968) 00:15:23.233 fused_ordering(969) 00:15:23.233 fused_ordering(970) 00:15:23.233 fused_ordering(971) 00:15:23.233 fused_ordering(972) 00:15:23.233 fused_ordering(973) 00:15:23.233 fused_ordering(974) 00:15:23.233 fused_ordering(975) 00:15:23.233 fused_ordering(976) 00:15:23.233 fused_ordering(977) 00:15:23.233 fused_ordering(978) 00:15:23.233 fused_ordering(979) 00:15:23.233 fused_ordering(980) 00:15:23.233 fused_ordering(981) 00:15:23.233 fused_ordering(982) 00:15:23.233 fused_ordering(983) 00:15:23.233 fused_ordering(984) 00:15:23.233 fused_ordering(985) 00:15:23.233 fused_ordering(986) 00:15:23.233 fused_ordering(987) 00:15:23.233 fused_ordering(988) 00:15:23.233 fused_ordering(989) 00:15:23.233 fused_ordering(990) 00:15:23.233 fused_ordering(991) 00:15:23.233 fused_ordering(992) 00:15:23.233 fused_ordering(993) 00:15:23.233 fused_ordering(994) 00:15:23.233 fused_ordering(995) 00:15:23.233 fused_ordering(996) 00:15:23.233 fused_ordering(997) 
00:15:23.233 fused_ordering(998) 00:15:23.233 fused_ordering(999) 00:15:23.233 fused_ordering(1000) 00:15:23.233 fused_ordering(1001) 00:15:23.233 fused_ordering(1002) 00:15:23.233 fused_ordering(1003) 00:15:23.233 fused_ordering(1004) 00:15:23.233 fused_ordering(1005) 00:15:23.233 fused_ordering(1006) 00:15:23.233 fused_ordering(1007) 00:15:23.233 fused_ordering(1008) 00:15:23.233 fused_ordering(1009) 00:15:23.233 fused_ordering(1010) 00:15:23.233 fused_ordering(1011) 00:15:23.233 fused_ordering(1012) 00:15:23.233 fused_ordering(1013) 00:15:23.233 fused_ordering(1014) 00:15:23.233 fused_ordering(1015) 00:15:23.233 fused_ordering(1016) 00:15:23.233 fused_ordering(1017) 00:15:23.233 fused_ordering(1018) 00:15:23.233 fused_ordering(1019) 00:15:23.233 fused_ordering(1020) 00:15:23.233 fused_ordering(1021) 00:15:23.233 fused_ordering(1022) 00:15:23.233 fused_ordering(1023) 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:23.233 rmmod nvme_tcp 00:15:23.233 rmmod nvme_fabrics 00:15:23.233 rmmod nvme_keyring 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3839160 ']' 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3839160 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3839160 ']' 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3839160 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3839160 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3839160' 00:15:23.233 killing process with pid 3839160 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3839160 00:15:23.233 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3839160 00:15:23.493 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:23.493 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:15:23.493 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:23.493 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:23.493 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:23.493 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:23.493 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:23.493 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:23.493 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:23.493 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.493 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.493 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.400 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:25.400 00:15:25.400 real 0m10.587s 00:15:25.400 user 0m5.478s 00:15:25.400 sys 0m5.430s 00:15:25.400 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.400 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:25.400 ************************************ 00:15:25.400 END TEST nvmf_fused_ordering 00:15:25.400 ************************************ 00:15:25.400 14:35:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:25.400 14:35:32 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:25.400 14:35:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.400 14:35:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:25.400 ************************************ 00:15:25.400 START TEST nvmf_ns_masking 00:15:25.400 ************************************ 00:15:25.400 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:25.660 * Looking for test storage... 00:15:25.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:25.660 14:35:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:25.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.660 --rc genhtml_branch_coverage=1 00:15:25.660 --rc genhtml_function_coverage=1 00:15:25.660 --rc genhtml_legend=1 00:15:25.660 --rc geninfo_all_blocks=1 00:15:25.660 --rc geninfo_unexecuted_blocks=1 00:15:25.660 00:15:25.660 ' 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:25.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.660 --rc genhtml_branch_coverage=1 00:15:25.660 --rc genhtml_function_coverage=1 00:15:25.660 --rc genhtml_legend=1 00:15:25.660 --rc geninfo_all_blocks=1 00:15:25.660 --rc geninfo_unexecuted_blocks=1 00:15:25.660 00:15:25.660 ' 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:25.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.660 --rc genhtml_branch_coverage=1 00:15:25.660 --rc genhtml_function_coverage=1 00:15:25.660 --rc genhtml_legend=1 00:15:25.660 --rc geninfo_all_blocks=1 00:15:25.660 --rc geninfo_unexecuted_blocks=1 00:15:25.660 00:15:25.660 ' 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:25.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.660 --rc genhtml_branch_coverage=1 00:15:25.660 --rc 
genhtml_function_coverage=1 00:15:25.660 --rc genhtml_legend=1 00:15:25.660 --rc geninfo_all_blocks=1 00:15:25.660 --rc geninfo_unexecuted_blocks=1 00:15:25.660 00:15:25.660 ' 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:25.660 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:25.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=a2be8767-c28b-4b54-a2a6-cf068b89a194 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=196c206e-b055-46b7-b292-1ea1544026ce 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=12bcae72-ef20-4e66-babc-0936848f6f4d 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:25.661 14:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:30.952 14:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:30.952 14:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:30.952 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:30.952 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:30.953 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:15:30.953 Found net devices under 0000:31:00.0: cvl_0_0 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:30.953 Found net devices under 0000:31:00.1: cvl_0_1 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:30.953 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:30.954 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:30.955 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:30.955 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:30.955 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:30.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:30.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:15:30.955 00:15:30.955 --- 10.0.0.2 ping statistics --- 00:15:30.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.955 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:15:30.955 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:30.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:30.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:15:30.955 00:15:30.955 --- 10.0.0.1 ping statistics --- 00:15:30.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.955 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:15:30.955 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.955 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:30.955 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:30.955 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.955 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:30.956 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:30.956 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.956 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:30.956 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:31.222 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:31.222 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:31.222 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:31.222 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:31.222 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3844178 00:15:31.222 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3844178 
00:15:31.222 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3844178 ']' 00:15:31.222 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.222 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.222 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.222 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.222 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:31.222 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:31.222 [2024-11-20 14:35:38.070008] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:15:31.222 [2024-11-20 14:35:38.070075] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.222 [2024-11-20 14:35:38.164360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.222 [2024-11-20 14:35:38.215685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.222 [2024-11-20 14:35:38.215737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:31.222 [2024-11-20 14:35:38.215745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.222 [2024-11-20 14:35:38.215753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.222 [2024-11-20 14:35:38.215759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.222 [2024-11-20 14:35:38.216556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.161 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.161 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:32.161 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:32.161 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:32.161 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:32.161 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.161 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:32.161 [2024-11-20 14:35:39.016448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.161 14:35:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:32.161 14:35:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:32.161 14:35:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:15:32.161 Malloc1 00:15:32.161 14:35:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:32.421 Malloc2 00:15:32.421 14:35:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:32.681 14:35:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:32.941 14:35:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.941 [2024-11-20 14:35:39.907824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.941 14:35:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:32.941 14:35:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 12bcae72-ef20-4e66-babc-0936848f6f4d -a 10.0.0.2 -s 4420 -i 4 00:15:33.200 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:33.200 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:33.200 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:33.200 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:33.200 14:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:35.107 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:35.107 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:35.107 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:35.107 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:35.107 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:35.107 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:35.107 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:35.107 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:35.366 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:35.366 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:35.366 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:35.366 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:35.366 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:35.366 [ 0]:0x1 00:15:35.366 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:35.366 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:35.366 
14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e348308f5646e1939fb58778954a6d 00:15:35.366 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8e348308f5646e1939fb58778954a6d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:35.366 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:35.625 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:35.625 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:35.625 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:35.625 [ 0]:0x1 00:15:35.625 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:35.625 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:35.625 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e348308f5646e1939fb58778954a6d 00:15:35.625 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8e348308f5646e1939fb58778954a6d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:35.625 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:35.625 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:35.625 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:35.625 [ 1]:0x2 00:15:35.625 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:35.625 14:35:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:35.625 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fdb6e0759ee42c1a5388e12b6e6eb62 00:15:35.625 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6fdb6e0759ee42c1a5388e12b6e6eb62 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:35.625 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:35.625 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:35.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.625 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.885 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:35.885 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:35.885 14:35:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 12bcae72-ef20-4e66-babc-0936848f6f4d -a 10.0.0.2 -s 4420 -i 4 00:15:36.145 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:36.145 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:36.145 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:36.145 14:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:36.145 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:36.145 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:38.052 [ 0]:0x2 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:38.052 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fdb6e0759ee42c1a5388e12b6e6eb62 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6fdb6e0759ee42c1a5388e12b6e6eb62 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:38.313 [ 0]:0x1 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e348308f5646e1939fb58778954a6d 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8e348308f5646e1939fb58778954a6d != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:38.313 [ 1]:0x2 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fdb6e0759ee42c1a5388e12b6e6eb62 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6fdb6e0759ee42c1a5388e12b6e6eb62 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:38.313 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:38.573 [ 0]:0x2 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fdb6e0759ee42c1a5388e12b6e6eb62 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6fdb6e0759ee42c1a5388e12b6e6eb62 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:38.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.573 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:38.832 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:38.832 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 12bcae72-ef20-4e66-babc-0936848f6f4d -a 10.0.0.2 -s 4420 -i 4 00:15:38.832 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:38.832 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:38.832 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:38.832 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:38.832 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:38.832 14:35:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:41.372 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:41.372 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:41.372 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:41.372 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:41.372 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:41.372 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:41.372 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:41.372 14:35:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:41.372 [ 0]:0x1 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:41.372 14:35:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e348308f5646e1939fb58778954a6d 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8e348308f5646e1939fb58778954a6d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:41.372 [ 1]:0x2 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fdb6e0759ee42c1a5388e12b6e6eb62 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6fdb6e0759ee42c1a5388e12b6e6eb62 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:41.372 
14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:41.372 [ 0]:0x2 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fdb6e0759ee42c1a5388e12b6e6eb62 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6fdb6e0759ee42c1a5388e12b6e6eb62 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:41.372 14:35:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:41.372 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:41.632 [2024-11-20 14:35:48.446953] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:41.632 request: 00:15:41.632 { 00:15:41.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.632 "nsid": 2, 00:15:41.632 "host": "nqn.2016-06.io.spdk:host1", 00:15:41.632 "method": "nvmf_ns_remove_host", 00:15:41.632 "req_id": 1 00:15:41.632 } 00:15:41.632 Got JSON-RPC error response 00:15:41.632 response: 00:15:41.632 { 00:15:41.632 "code": -32602, 00:15:41.632 "message": "Invalid parameters" 00:15:41.632 } 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:41.632 14:35:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:41.632 [ 0]:0x2 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fdb6e0759ee42c1a5388e12b6e6eb62 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6fdb6e0759ee42c1a5388e12b6e6eb62 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:41.632 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:41.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.891 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3846683 00:15:41.891 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.891 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3846683 
/var/tmp/host.sock 00:15:41.891 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3846683 ']' 00:15:41.891 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:41.891 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:41.891 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.891 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:41.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:41.891 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.891 14:35:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:41.891 [2024-11-20 14:35:48.773460] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:15:41.891 [2024-11-20 14:35:48.773516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846683 ] 00:15:41.891 [2024-11-20 14:35:48.851502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.891 [2024-11-20 14:35:48.887325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.827 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.827 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:42.827 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:42.828 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:42.828 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid a2be8767-c28b-4b54-a2a6-cf068b89a194 00:15:42.828 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:42.828 14:35:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A2BE8767C28B4B54A2A6CF068B89A194 -i 00:15:43.087 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 196c206e-b055-46b7-b292-1ea1544026ce 00:15:43.087 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:43.087 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 196C206EB05546B7B2921EA1544026CE -i 00:15:43.345 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:43.345 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:43.603 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:43.603 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:43.862 nvme0n1 00:15:43.862 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:43.862 14:35:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:44.121 nvme1n2 00:15:44.121 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:44.121 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:44.121 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:44.121 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:44.121 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:44.379 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:44.379 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:44.379 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:44.380 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:44.380 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ a2be8767-c28b-4b54-a2a6-cf068b89a194 == \a\2\b\e\8\7\6\7\-\c\2\8\b\-\4\b\5\4\-\a\2\a\6\-\c\f\0\6\8\b\8\9\a\1\9\4 ]] 00:15:44.380 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:44.380 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:44.380 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:44.638 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 196c206e-b055-46b7-b292-1ea1544026ce == \1\9\6\c\2\0\6\e\-\b\0\5\5\-\4\6\b\7\-\b\2\9\2\-\1\e\a\1\5\4\4\0\2\6\c\e ]] 00:15:44.638 14:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:44.897 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:44.897 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid a2be8767-c28b-4b54-a2a6-cf068b89a194 00:15:44.897 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:44.897 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A2BE8767C28B4B54A2A6CF068B89A194 00:15:44.897 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:44.897 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A2BE8767C28B4B54A2A6CF068B89A194 00:15:44.897 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:44.897 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.897 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:44.897 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.897 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:44.897 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.897 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:44.897 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:44.897 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A2BE8767C28B4B54A2A6CF068B89A194 00:15:45.157 [2024-11-20 14:35:52.028708] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:45.157 [2024-11-20 14:35:52.028739] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:45.157 [2024-11-20 14:35:52.028746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.157 request: 00:15:45.157 { 00:15:45.157 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:45.157 "namespace": { 00:15:45.157 "bdev_name": "invalid", 00:15:45.157 "nsid": 1, 00:15:45.157 "nguid": "A2BE8767C28B4B54A2A6CF068B89A194", 00:15:45.157 "no_auto_visible": false, 00:15:45.157 "hide_metadata": false 00:15:45.157 }, 00:15:45.157 "method": "nvmf_subsystem_add_ns", 00:15:45.157 "req_id": 1 00:15:45.157 } 00:15:45.157 Got JSON-RPC error response 00:15:45.157 response: 00:15:45.157 { 00:15:45.157 "code": -32602, 00:15:45.157 "message": "Invalid parameters" 00:15:45.157 } 00:15:45.157 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:45.157 14:35:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:45.157 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:45.157 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:45.157 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid a2be8767-c28b-4b54-a2a6-cf068b89a194 00:15:45.157 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:45.157 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A2BE8767C28B4B54A2A6CF068B89A194 -i 00:15:45.157 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:47.695 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:47.695 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:47.695 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:47.695 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:47.695 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3846683 00:15:47.695 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3846683 ']' 00:15:47.695 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3846683 00:15:47.695 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:47.695 14:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.695 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3846683 00:15:47.695 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:47.695 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:47.695 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3846683' 00:15:47.695 killing process with pid 3846683 00:15:47.695 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3846683 00:15:47.695 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3846683 00:15:47.696 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:47.955 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:47.955 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:47.955 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:47.955 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:47.955 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:47.955 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:47.955 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:47.955 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:15:47.955 rmmod nvme_tcp 00:15:47.955 rmmod nvme_fabrics 00:15:47.955 rmmod nvme_keyring 00:15:47.955 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:47.956 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:47.956 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:47.956 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3844178 ']' 00:15:47.956 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3844178 00:15:47.956 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3844178 ']' 00:15:47.956 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3844178 00:15:47.956 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:47.956 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.956 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3844178 00:15:47.956 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:47.956 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:47.956 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3844178' 00:15:47.956 killing process with pid 3844178 00:15:47.956 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3844178 00:15:47.956 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3844178 00:15:47.956 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:47.956 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:47.956 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:47.956 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:47.956 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:47.956 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:47.956 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:47.956 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:47.956 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:47.956 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.956 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.956 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:50.497 00:15:50.497 real 0m24.621s 00:15:50.497 user 0m28.655s 00:15:50.497 sys 0m6.100s 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:50.497 ************************************ 00:15:50.497 END TEST nvmf_ns_masking 00:15:50.497 ************************************ 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:50.497 ************************************ 00:15:50.497 START TEST nvmf_nvme_cli 00:15:50.497 ************************************ 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:50.497 * Looking for test storage... 00:15:50.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:50.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.497 --rc genhtml_branch_coverage=1 00:15:50.497 --rc genhtml_function_coverage=1 00:15:50.497 --rc genhtml_legend=1 00:15:50.497 --rc geninfo_all_blocks=1 00:15:50.497 --rc geninfo_unexecuted_blocks=1 00:15:50.497 
00:15:50.497 ' 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:50.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.497 --rc genhtml_branch_coverage=1 00:15:50.497 --rc genhtml_function_coverage=1 00:15:50.497 --rc genhtml_legend=1 00:15:50.497 --rc geninfo_all_blocks=1 00:15:50.497 --rc geninfo_unexecuted_blocks=1 00:15:50.497 00:15:50.497 ' 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:50.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.497 --rc genhtml_branch_coverage=1 00:15:50.497 --rc genhtml_function_coverage=1 00:15:50.497 --rc genhtml_legend=1 00:15:50.497 --rc geninfo_all_blocks=1 00:15:50.497 --rc geninfo_unexecuted_blocks=1 00:15:50.497 00:15:50.497 ' 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:50.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.497 --rc genhtml_branch_coverage=1 00:15:50.497 --rc genhtml_function_coverage=1 00:15:50.497 --rc genhtml_legend=1 00:15:50.497 --rc geninfo_all_blocks=1 00:15:50.497 --rc geninfo_unexecuted_blocks=1 00:15:50.497 00:15:50.497 ' 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.497 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.498 14:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:50.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:50.498 14:35:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:55.768 14:36:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.768 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:55.769 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:55.769 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.769 14:36:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:55.769 Found net devices under 0000:31:00.0: cvl_0_0 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:55.769 Found net devices under 0000:31:00.1: cvl_0_1 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.769 14:36:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.769 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:55.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:15:55.770 00:15:55.770 --- 10.0.0.2 ping statistics --- 00:15:55.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.770 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:15:55.770 00:15:55.770 --- 10.0.0.1 ping statistics --- 00:15:55.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.770 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:55.770 14:36:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3852514 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3852514 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3852514 ']' 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.770 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:56.030 [2024-11-20 14:36:02.827782] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:15:56.030 [2024-11-20 14:36:02.827829] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.030 [2024-11-20 14:36:02.909153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.030 [2024-11-20 14:36:02.964044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.030 [2024-11-20 14:36:02.964101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.030 [2024-11-20 14:36:02.964110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.030 [2024-11-20 14:36:02.964118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.030 [2024-11-20 14:36:02.964124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:56.030 [2024-11-20 14:36:02.966564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.030 [2024-11-20 14:36:02.966725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.030 [2024-11-20 14:36:02.966890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.030 [2024-11-20 14:36:02.966891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.600 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.600 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:56.600 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:56.600 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:56.600 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:56.859 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.859 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:56.859 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.859 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:56.859 [2024-11-20 14:36:03.673050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.859 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.859 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:56.859 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:56.859 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:56.859 Malloc0
00:15:56.859 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.859 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:15:56.859 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.859 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:56.859 Malloc1
00:15:56.859 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:56.860 [2024-11-20 14:36:03.766352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.860 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420
00:15:57.120
00:15:57.120 Discovery Log Number of Records 2, Generation counter 2
00:15:57.120 =====Discovery Log Entry 0======
00:15:57.120 trtype: tcp
00:15:57.120 adrfam: ipv4
00:15:57.120 subtype: current discovery subsystem
00:15:57.120 treq: not required
00:15:57.120 portid: 0
00:15:57.120 trsvcid: 4420
00:15:57.120 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:15:57.120 traddr: 10.0.0.2
00:15:57.120 eflags: explicit discovery connections, duplicate discovery information
00:15:57.120 sectype: none
00:15:57.120 =====Discovery Log Entry 1======
00:15:57.120 trtype: tcp
00:15:57.120 adrfam: ipv4
00:15:57.120 subtype: nvme subsystem
00:15:57.120 treq: not required
00:15:57.120 portid: 0
00:15:57.120 trsvcid: 4420
00:15:57.120 subnqn: nqn.2016-06.io.spdk:cnode1
00:15:57.120 traddr: 10.0.0.2
00:15:57.120 eflags: none
00:15:57.120 sectype: none
00:15:57.120 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:15:57.120 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:15:57.120 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:15:57.120 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:15:57.120 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:15:57.120 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:15:57.120 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:15:57.120 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:15:57.120 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:15:57.120 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:15:57.120 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:15:58.500 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:15:58.500 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0
00:15:58.500 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:15:58.500 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:15:58.500 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:15:58.500 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2
00:16:00.405 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1
00:16:00.664 /dev/nvme0n2 ]]
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs))
00:16:00.664 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:00.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection ))
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:00.665 rmmod nvme_tcp
00:16:00.665 rmmod nvme_fabrics
00:16:00.665 rmmod nvme_keyring
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3852514 ']'
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3852514
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3852514 ']'
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3852514
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3852514
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3852514'
00:16:00.665 killing process with pid 3852514
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3852514
00:16:00.665 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3852514
00:16:00.924 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:00.924 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:16:00.924 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:16:00.924 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr
00:16:00.924 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save
00:16:00.924 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore
00:16:00.924 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:16:00.924 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:00.924 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns
00:16:00.924 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:00.924 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:00.924 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:03.461 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:16:03.461
00:16:03.461 real 0m12.792s
00:16:03.461 user 0m20.914s
00:16:03.461 sys 0m4.752s
00:16:03.461 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:03.461 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:16:03.461 ************************************
00:16:03.461 END TEST nvmf_nvme_cli
00:16:03.461 ************************************
00:16:03.461 14:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]]
00:16:03.461 14:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:16:03.461 14:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:16:03.461 14:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:03.461 14:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:03.461 ************************************
00:16:03.461 START TEST nvmf_vfio_user
00:16:03.461 ************************************
00:16:03.461 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:16:03.461 * Looking for test storage...
00:16:03.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-:
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-:
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<'
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:16:03.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:03.461 --rc genhtml_branch_coverage=1
00:16:03.461 --rc genhtml_function_coverage=1
00:16:03.461 --rc genhtml_legend=1
00:16:03.461 --rc geninfo_all_blocks=1
00:16:03.461 --rc geninfo_unexecuted_blocks=1
00:16:03.461
00:16:03.461 '
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:16:03.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:03.461 --rc genhtml_branch_coverage=1
00:16:03.461 --rc genhtml_function_coverage=1
00:16:03.461 --rc genhtml_legend=1
00:16:03.461 --rc geninfo_all_blocks=1
00:16:03.461 --rc geninfo_unexecuted_blocks=1
00:16:03.461
00:16:03.461 '
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:16:03.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:03.461 --rc genhtml_branch_coverage=1
00:16:03.461 --rc genhtml_function_coverage=1
00:16:03.461 --rc genhtml_legend=1
00:16:03.461 --rc geninfo_all_blocks=1
00:16:03.461 --rc geninfo_unexecuted_blocks=1
00:16:03.461
00:16:03.461 '
00:16:03.461 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:16:03.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:03.461 --rc genhtml_branch_coverage=1
00:16:03.461 --rc genhtml_function_coverage=1
00:16:03.461 --rc genhtml_legend=1
00:16:03.461 --rc geninfo_all_blocks=1
00:16:03.462 --rc geninfo_unexecuted_blocks=1
00:16:03.462
00:16:03.462 '
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:16:03.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' ''
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args=
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3854760
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3854760'
00:16:03.462 Process pid: 3854760
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3854760
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3854760 ']'
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:03.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'
[2024-11-20 14:36:10.136116] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization...
[2024-11-20 14:36:10.136185] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-11-20 14:36:10.206822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-11-20 14:36:10.245923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-11-20 14:36:10.245957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-11-20 14:36:10.245963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-11-20 14:36:10.245968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-11-20 14:36:10.245972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:03.462 [2024-11-20 14:36:10.247635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.462 [2024-11-20 14:36:10.247826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.462 [2024-11-20 14:36:10.247977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.462 [2024-11-20 14:36:10.247979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:03.462 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:04.400 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:04.659 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:04.659 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:04.659 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:04.659 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:04.659 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:04.659 Malloc1 00:16:04.659 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:04.917 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:05.177 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:05.177 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:05.177 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:05.177 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:05.436 Malloc2 00:16:05.436 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:05.436 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:05.696 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:05.958 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:05.958 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:05.958 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:16:05.958 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:05.958 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:05.958 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:05.958 [2024-11-20 14:36:12.810281] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:16:05.958 [2024-11-20 14:36:12.810311] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855332 ] 00:16:05.958 [2024-11-20 14:36:12.848589] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:05.958 [2024-11-20 14:36:12.853833] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:05.958 [2024-11-20 14:36:12.853852] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0575a72000 00:16:05.958 [2024-11-20 14:36:12.854833] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:05.958 [2024-11-20 14:36:12.855840] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:05.958 [2024-11-20 14:36:12.856840] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:05.958 [2024-11-20 14:36:12.857845] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:05.958 [2024-11-20 14:36:12.858847] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:05.958 [2024-11-20 14:36:12.859856] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:05.958 [2024-11-20 14:36:12.860858] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:05.958 [2024-11-20 14:36:12.861864] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:05.958 [2024-11-20 14:36:12.862866] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:05.958 [2024-11-20 14:36:12.862873] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0575a67000 00:16:05.958 [2024-11-20 14:36:12.863789] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:05.958 [2024-11-20 14:36:12.873260] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:05.958 [2024-11-20 14:36:12.873282] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:16:05.958 [2024-11-20 14:36:12.878969] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:16:05.958 [2024-11-20 14:36:12.879001] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:05.958 [2024-11-20 14:36:12.879061] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:16:05.958 [2024-11-20 14:36:12.879073] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:16:05.958 [2024-11-20 14:36:12.879077] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:16:05.958 [2024-11-20 14:36:12.879976] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:05.958 [2024-11-20 14:36:12.879984] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:16:05.958 [2024-11-20 14:36:12.879990] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:16:05.958 [2024-11-20 14:36:12.880981] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:05.958 [2024-11-20 14:36:12.880990] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:16:05.958 [2024-11-20 14:36:12.880995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:05.958 [2024-11-20 14:36:12.881987] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:05.958 [2024-11-20 14:36:12.881994] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:05.958 [2024-11-20 14:36:12.882995] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:05.958 [2024-11-20 14:36:12.883001] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:05.958 [2024-11-20 14:36:12.883004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:05.958 [2024-11-20 14:36:12.883009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:05.958 [2024-11-20 14:36:12.883115] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:16:05.958 [2024-11-20 14:36:12.883119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:05.958 [2024-11-20 14:36:12.883122] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:05.958 [2024-11-20 14:36:12.883993] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:05.958 [2024-11-20 14:36:12.885002] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:05.958 [2024-11-20 14:36:12.886008] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:16:05.958 [2024-11-20 14:36:12.887007] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:05.958 [2024-11-20 14:36:12.887054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:05.958 [2024-11-20 14:36:12.888019] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:05.958 [2024-11-20 14:36:12.888024] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:05.959 [2024-11-20 14:36:12.888028] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888043] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:16:05.959 [2024-11-20 14:36:12.888048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888058] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:05.959 [2024-11-20 14:36:12.888062] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:05.959 [2024-11-20 14:36:12.888064] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:05.959 [2024-11-20 14:36:12.888075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:05.959 [2024-11-20 14:36:12.888106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:05.959 [2024-11-20 14:36:12.888113] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:16:05.959 [2024-11-20 14:36:12.888117] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:16:05.959 [2024-11-20 14:36:12.888120] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:16:05.959 [2024-11-20 14:36:12.888123] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:05.959 [2024-11-20 14:36:12.888130] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:16:05.959 [2024-11-20 14:36:12.888135] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:16:05.959 [2024-11-20 14:36:12.888138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:05.959 [2024-11-20 14:36:12.888162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:05.959 [2024-11-20 14:36:12.888170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.959 [2024-11-20 
14:36:12.888176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.959 [2024-11-20 14:36:12.888183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.959 [2024-11-20 14:36:12.888189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.959 [2024-11-20 14:36:12.888192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:05.959 [2024-11-20 14:36:12.888212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:05.959 [2024-11-20 14:36:12.888218] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:16:05.959 [2024-11-20 14:36:12.888222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888237] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:05.959 [2024-11-20 14:36:12.888250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:05.959 [2024-11-20 14:36:12.888295] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888301] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888306] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:05.959 [2024-11-20 14:36:12.888309] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:05.959 [2024-11-20 14:36:12.888311] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:05.959 [2024-11-20 14:36:12.888316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:05.959 [2024-11-20 14:36:12.888327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:05.959 [2024-11-20 14:36:12.888334] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:16:05.959 [2024-11-20 14:36:12.888343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888353] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:05.959 [2024-11-20 14:36:12.888356] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:05.959 [2024-11-20 14:36:12.888358] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:05.959 [2024-11-20 14:36:12.888363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:05.959 [2024-11-20 14:36:12.888377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:05.959 [2024-11-20 14:36:12.888386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888396] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:05.959 [2024-11-20 14:36:12.888399] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:05.959 [2024-11-20 14:36:12.888402] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:05.959 [2024-11-20 14:36:12.888406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:05.959 [2024-11-20 14:36:12.888414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:05.959 [2024-11-20 14:36:12.888420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888442] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888446] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:05.959 [2024-11-20 14:36:12.888449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:16:05.959 [2024-11-20 14:36:12.888452] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:16:05.959 [2024-11-20 14:36:12.888466] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:05.959 [2024-11-20 14:36:12.888476] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:05.959 [2024-11-20 14:36:12.888484] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:05.959 [2024-11-20 14:36:12.888491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:05.959 [2024-11-20 14:36:12.888499] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:05.959 [2024-11-20 14:36:12.888512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:05.959 [2024-11-20 14:36:12.888520] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:05.959 [2024-11-20 14:36:12.888527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:05.959 [2024-11-20 14:36:12.888536] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:05.959 [2024-11-20 14:36:12.888539] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:05.959 [2024-11-20 14:36:12.888541] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:05.959 [2024-11-20 14:36:12.888544] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:05.959 [2024-11-20 14:36:12.888546] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:05.959 [2024-11-20 14:36:12.888551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:16:05.959 [2024-11-20 14:36:12.888556] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:05.959 [2024-11-20 14:36:12.888559] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:05.959 [2024-11-20 14:36:12.888561] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:05.959 [2024-11-20 14:36:12.888565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:05.960 [2024-11-20 14:36:12.888571] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:05.960 [2024-11-20 14:36:12.888574] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:05.960 [2024-11-20 14:36:12.888576] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:05.960 [2024-11-20 14:36:12.888580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:05.960 [2024-11-20 14:36:12.888587] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:05.960 [2024-11-20 14:36:12.888590] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:05.960 [2024-11-20 14:36:12.888592] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:05.960 [2024-11-20 14:36:12.888596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:05.960 [2024-11-20 14:36:12.888601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:16:05.960 [2024-11-20 14:36:12.888610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:05.960 [2024-11-20 14:36:12.888619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:05.960 [2024-11-20 14:36:12.888624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:05.960 ===================================================== 00:16:05.960 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:05.960 ===================================================== 00:16:05.960 Controller Capabilities/Features 00:16:05.960 ================================ 00:16:05.960 Vendor ID: 4e58 00:16:05.960 Subsystem Vendor ID: 4e58 00:16:05.960 Serial Number: SPDK1 00:16:05.960 Model Number: SPDK bdev Controller 00:16:05.960 Firmware Version: 25.01 00:16:05.960 Recommended Arb Burst: 6 00:16:05.960 IEEE OUI Identifier: 8d 6b 50 00:16:05.960 Multi-path I/O 00:16:05.960 May have multiple subsystem ports: Yes 00:16:05.960 May have multiple controllers: Yes 00:16:05.960 Associated with SR-IOV VF: No 00:16:05.960 Max Data Transfer Size: 131072 00:16:05.960 Max Number of Namespaces: 32 00:16:05.960 Max Number of I/O Queues: 127 00:16:05.960 NVMe Specification Version (VS): 1.3 00:16:05.960 NVMe Specification Version (Identify): 1.3 00:16:05.960 Maximum Queue Entries: 256 00:16:05.960 Contiguous Queues Required: Yes 00:16:05.960 Arbitration Mechanisms Supported 00:16:05.960 Weighted Round Robin: Not Supported 00:16:05.960 Vendor Specific: Not Supported 00:16:05.960 Reset Timeout: 15000 ms 00:16:05.960 Doorbell Stride: 4 bytes 00:16:05.960 NVM Subsystem Reset: Not Supported 00:16:05.960 Command Sets Supported 00:16:05.960 NVM Command Set: Supported 00:16:05.960 Boot Partition: Not Supported 00:16:05.960 Memory 
Page Size Minimum: 4096 bytes 00:16:05.960 Memory Page Size Maximum: 4096 bytes 00:16:05.960 Persistent Memory Region: Not Supported 00:16:05.960 Optional Asynchronous Events Supported 00:16:05.960 Namespace Attribute Notices: Supported 00:16:05.960 Firmware Activation Notices: Not Supported 00:16:05.960 ANA Change Notices: Not Supported 00:16:05.960 PLE Aggregate Log Change Notices: Not Supported 00:16:05.960 LBA Status Info Alert Notices: Not Supported 00:16:05.960 EGE Aggregate Log Change Notices: Not Supported 00:16:05.960 Normal NVM Subsystem Shutdown event: Not Supported 00:16:05.960 Zone Descriptor Change Notices: Not Supported 00:16:05.960 Discovery Log Change Notices: Not Supported 00:16:05.960 Controller Attributes 00:16:05.960 128-bit Host Identifier: Supported 00:16:05.960 Non-Operational Permissive Mode: Not Supported 00:16:05.960 NVM Sets: Not Supported 00:16:05.960 Read Recovery Levels: Not Supported 00:16:05.960 Endurance Groups: Not Supported 00:16:05.960 Predictable Latency Mode: Not Supported 00:16:05.960 Traffic Based Keep ALive: Not Supported 00:16:05.960 Namespace Granularity: Not Supported 00:16:05.960 SQ Associations: Not Supported 00:16:05.960 UUID List: Not Supported 00:16:05.960 Multi-Domain Subsystem: Not Supported 00:16:05.960 Fixed Capacity Management: Not Supported 00:16:05.960 Variable Capacity Management: Not Supported 00:16:05.960 Delete Endurance Group: Not Supported 00:16:05.960 Delete NVM Set: Not Supported 00:16:05.960 Extended LBA Formats Supported: Not Supported 00:16:05.960 Flexible Data Placement Supported: Not Supported 00:16:05.960 00:16:05.960 Controller Memory Buffer Support 00:16:05.960 ================================ 00:16:05.960 Supported: No 00:16:05.960 00:16:05.960 Persistent Memory Region Support 00:16:05.960 ================================ 00:16:05.960 Supported: No 00:16:05.960 00:16:05.960 Admin Command Set Attributes 00:16:05.960 ============================ 00:16:05.960 Security Send/Receive: Not Supported 
00:16:05.960 Format NVM: Not Supported 00:16:05.960 Firmware Activate/Download: Not Supported 00:16:05.960 Namespace Management: Not Supported 00:16:05.960 Device Self-Test: Not Supported 00:16:05.960 Directives: Not Supported 00:16:05.960 NVMe-MI: Not Supported 00:16:05.960 Virtualization Management: Not Supported 00:16:05.960 Doorbell Buffer Config: Not Supported 00:16:05.960 Get LBA Status Capability: Not Supported 00:16:05.960 Command & Feature Lockdown Capability: Not Supported 00:16:05.960 Abort Command Limit: 4 00:16:05.960 Async Event Request Limit: 4 00:16:05.960 Number of Firmware Slots: N/A 00:16:05.960 Firmware Slot 1 Read-Only: N/A 00:16:05.960 Firmware Activation Without Reset: N/A 00:16:05.960 Multiple Update Detection Support: N/A 00:16:05.960 Firmware Update Granularity: No Information Provided 00:16:05.960 Per-Namespace SMART Log: No 00:16:05.960 Asymmetric Namespace Access Log Page: Not Supported 00:16:05.960 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:05.960 Command Effects Log Page: Supported 00:16:05.960 Get Log Page Extended Data: Supported 00:16:05.960 Telemetry Log Pages: Not Supported 00:16:05.960 Persistent Event Log Pages: Not Supported 00:16:05.960 Supported Log Pages Log Page: May Support 00:16:05.960 Commands Supported & Effects Log Page: Not Supported 00:16:05.960 Feature Identifiers & Effects Log Page:May Support 00:16:05.960 NVMe-MI Commands & Effects Log Page: May Support 00:16:05.960 Data Area 4 for Telemetry Log: Not Supported 00:16:05.960 Error Log Page Entries Supported: 128 00:16:05.960 Keep Alive: Supported 00:16:05.960 Keep Alive Granularity: 10000 ms 00:16:05.960 00:16:05.960 NVM Command Set Attributes 00:16:05.960 ========================== 00:16:05.960 Submission Queue Entry Size 00:16:05.960 Max: 64 00:16:05.960 Min: 64 00:16:05.960 Completion Queue Entry Size 00:16:05.960 Max: 16 00:16:05.960 Min: 16 00:16:05.960 Number of Namespaces: 32 00:16:05.960 Compare Command: Supported 00:16:05.960 Write Uncorrectable 
Command: Not Supported 00:16:05.960 Dataset Management Command: Supported 00:16:05.960 Write Zeroes Command: Supported 00:16:05.960 Set Features Save Field: Not Supported 00:16:05.960 Reservations: Not Supported 00:16:05.960 Timestamp: Not Supported 00:16:05.960 Copy: Supported 00:16:05.960 Volatile Write Cache: Present 00:16:05.960 Atomic Write Unit (Normal): 1 00:16:05.960 Atomic Write Unit (PFail): 1 00:16:05.960 Atomic Compare & Write Unit: 1 00:16:05.960 Fused Compare & Write: Supported 00:16:05.960 Scatter-Gather List 00:16:05.960 SGL Command Set: Supported (Dword aligned) 00:16:05.960 SGL Keyed: Not Supported 00:16:05.960 SGL Bit Bucket Descriptor: Not Supported 00:16:05.960 SGL Metadata Pointer: Not Supported 00:16:05.960 Oversized SGL: Not Supported 00:16:05.960 SGL Metadata Address: Not Supported 00:16:05.960 SGL Offset: Not Supported 00:16:05.960 Transport SGL Data Block: Not Supported 00:16:05.960 Replay Protected Memory Block: Not Supported 00:16:05.960 00:16:05.960 Firmware Slot Information 00:16:05.960 ========================= 00:16:05.960 Active slot: 1 00:16:05.960 Slot 1 Firmware Revision: 25.01 00:16:05.960 00:16:05.960 00:16:05.960 Commands Supported and Effects 00:16:05.960 ============================== 00:16:05.960 Admin Commands 00:16:05.960 -------------- 00:16:05.960 Get Log Page (02h): Supported 00:16:05.960 Identify (06h): Supported 00:16:05.960 Abort (08h): Supported 00:16:05.960 Set Features (09h): Supported 00:16:05.960 Get Features (0Ah): Supported 00:16:05.960 Asynchronous Event Request (0Ch): Supported 00:16:05.960 Keep Alive (18h): Supported 00:16:05.960 I/O Commands 00:16:05.960 ------------ 00:16:05.960 Flush (00h): Supported LBA-Change 00:16:05.960 Write (01h): Supported LBA-Change 00:16:05.960 Read (02h): Supported 00:16:05.960 Compare (05h): Supported 00:16:05.960 Write Zeroes (08h): Supported LBA-Change 00:16:05.960 Dataset Management (09h): Supported LBA-Change 00:16:05.961 Copy (19h): Supported LBA-Change 00:16:05.961 
00:16:05.961 Error Log 00:16:05.961 ========= 00:16:05.961 00:16:05.961 Arbitration 00:16:05.961 =========== 00:16:05.961 Arbitration Burst: 1 00:16:05.961 00:16:05.961 Power Management 00:16:05.961 ================ 00:16:05.961 Number of Power States: 1 00:16:05.961 Current Power State: Power State #0 00:16:05.961 Power State #0: 00:16:05.961 Max Power: 0.00 W 00:16:05.961 Non-Operational State: Operational 00:16:05.961 Entry Latency: Not Reported 00:16:05.961 Exit Latency: Not Reported 00:16:05.961 Relative Read Throughput: 0 00:16:05.961 Relative Read Latency: 0 00:16:05.961 Relative Write Throughput: 0 00:16:05.961 Relative Write Latency: 0 00:16:05.961 Idle Power: Not Reported 00:16:05.961 Active Power: Not Reported 00:16:05.961 Non-Operational Permissive Mode: Not Supported 00:16:05.961 00:16:05.961 Health Information 00:16:05.961 ================== 00:16:05.961 Critical Warnings: 00:16:05.961 Available Spare Space: OK 00:16:05.961 Temperature: OK 00:16:05.961 Device Reliability: OK 00:16:05.961 Read Only: No 00:16:05.961 Volatile Memory Backup: OK 00:16:05.961 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:05.961 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:05.961 Available Spare: 0% 00:16:05.961 [2024-11-20 14:36:12.888696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:05.961 [2024-11-20 14:36:12.888707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:05.961 [2024-11-20 14:36:12.888726] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:16:05.961 [2024-11-20 14:36:12.888733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.961 [2024-11-20 14:36:12.888737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.961 [2024-11-20 14:36:12.888742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.961 [2024-11-20 14:36:12.888746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.961 [2024-11-20 14:36:12.889024] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:05.961 [2024-11-20 14:36:12.889031] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:05.961 [2024-11-20 14:36:12.890023] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:05.961 [2024-11-20 14:36:12.890063] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:16:05.961 [2024-11-20 14:36:12.890068] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:16:05.961 [2024-11-20 14:36:12.891029] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:05.961 [2024-11-20 14:36:12.891038] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:16:05.961 [2024-11-20 14:36:12.891087] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:05.961 [2024-11-20 14:36:12.893254] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:05.961 Available Spare Threshold: 0% 00:16:05.961 Life Percentage Used: 0%
00:16:05.961 Data Units Read: 0 00:16:05.961 Data Units Written: 0 00:16:05.961 Host Read Commands: 0 00:16:05.961 Host Write Commands: 0 00:16:05.961 Controller Busy Time: 0 minutes 00:16:05.961 Power Cycles: 0 00:16:05.961 Power On Hours: 0 hours 00:16:05.961 Unsafe Shutdowns: 0 00:16:05.961 Unrecoverable Media Errors: 0 00:16:05.961 Lifetime Error Log Entries: 0 00:16:05.961 Warning Temperature Time: 0 minutes 00:16:05.961 Critical Temperature Time: 0 minutes 00:16:05.961 00:16:05.961 Number of Queues 00:16:05.961 ================ 00:16:05.961 Number of I/O Submission Queues: 127 00:16:05.961 Number of I/O Completion Queues: 127 00:16:05.961 00:16:05.961 Active Namespaces 00:16:05.961 ================= 00:16:05.961 Namespace ID:1 00:16:05.961 Error Recovery Timeout: Unlimited 00:16:05.961 Command Set Identifier: NVM (00h) 00:16:05.961 Deallocate: Supported 00:16:05.961 Deallocated/Unwritten Error: Not Supported 00:16:05.961 Deallocated Read Value: Unknown 00:16:05.961 Deallocate in Write Zeroes: Not Supported 00:16:05.961 Deallocated Guard Field: 0xFFFF 00:16:05.961 Flush: Supported 00:16:05.961 Reservation: Supported 00:16:05.961 Namespace Sharing Capabilities: Multiple Controllers 00:16:05.961 Size (in LBAs): 131072 (0GiB) 00:16:05.961 Capacity (in LBAs): 131072 (0GiB) 00:16:05.961 Utilization (in LBAs): 131072 (0GiB) 00:16:05.961 NGUID: C304FCDAFA2F42129ACEA50CE4A32DEA 00:16:05.961 UUID: c304fcda-fa2f-4212-9ace-a50ce4a32dea 00:16:05.961 Thin Provisioning: Not Supported 00:16:05.961 Per-NS Atomic Units: Yes 00:16:05.961 Atomic Boundary Size (Normal): 0 00:16:05.961 Atomic Boundary Size (PFail): 0 00:16:05.961 Atomic Boundary Offset: 0 00:16:05.961 Maximum Single Source Range Length: 65535 00:16:05.961 Maximum Copy Length: 65535 00:16:05.961 Maximum Source Range Count: 1 00:16:05.961 NGUID/EUI64 Never Reused: No 00:16:05.961 Namespace Write Protected: No 00:16:05.961 Number of LBA Formats: 1 00:16:05.961 Current LBA Format: LBA Format #00 00:16:05.961 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:16:05.961 00:16:05.961 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:06.221 [2024-11-20 14:36:13.063899] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:11.631 Initializing NVMe Controllers 00:16:11.631 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:11.631 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:11.631 Initialization complete. Launching workers. 00:16:11.631 ======================================================== 00:16:11.631 Latency(us) 00:16:11.631 Device Information : IOPS MiB/s Average min max 00:16:11.631 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40043.21 156.42 3196.41 853.82 9300.46 00:16:11.631 ======================================================== 00:16:11.631 Total : 40043.21 156.42 3196.41 853.82 9300.46 00:16:11.631 00:16:11.631 [2024-11-20 14:36:18.086239] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:11.632 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:11.632 [2024-11-20 14:36:18.270069] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:16.908 Initializing NVMe Controllers 00:16:16.908 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:16.908 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:16.908 Initialization complete. Launching workers. 00:16:16.908 ======================================================== 00:16:16.908 Latency(us) 00:16:16.908 Device Information : IOPS MiB/s Average min max 00:16:16.908 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16011.14 62.54 7999.99 3991.26 11973.62 00:16:16.908 ======================================================== 00:16:16.908 Total : 16011.14 62.54 7999.99 3991.26 11973.62 00:16:16.908 00:16:16.908 [2024-11-20 14:36:23.312117] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:16.908 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:16.908 [2024-11-20 14:36:23.500938] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:22.184 [2024-11-20 14:36:28.565453] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:22.184 Initializing NVMe Controllers 00:16:22.184 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:22.184 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:22.184 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:22.184 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:22.184 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:22.184 Initialization complete. 
Launching workers. 00:16:22.184 Starting thread on core 2 00:16:22.184 Starting thread on core 3 00:16:22.184 Starting thread on core 1 00:16:22.184 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:22.184 [2024-11-20 14:36:28.810610] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:25.473 [2024-11-20 14:36:31.864397] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:25.473 Initializing NVMe Controllers 00:16:25.473 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:25.473 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:25.473 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:25.473 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:25.473 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:25.473 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:25.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:25.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:25.473 Initialization complete. Launching workers. 
00:16:25.473 Starting thread on core 1 with urgent priority queue 00:16:25.473 Starting thread on core 2 with urgent priority queue 00:16:25.473 Starting thread on core 3 with urgent priority queue 00:16:25.473 Starting thread on core 0 with urgent priority queue 00:16:25.474 SPDK bdev Controller (SPDK1 ) core 0: 12448.33 IO/s 8.03 secs/100000 ios 00:16:25.474 SPDK bdev Controller (SPDK1 ) core 1: 11132.00 IO/s 8.98 secs/100000 ios 00:16:25.474 SPDK bdev Controller (SPDK1 ) core 2: 15173.67 IO/s 6.59 secs/100000 ios 00:16:25.474 SPDK bdev Controller (SPDK1 ) core 3: 12186.33 IO/s 8.21 secs/100000 ios 00:16:25.474 ======================================================== 00:16:25.474 00:16:25.474 14:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:25.474 [2024-11-20 14:36:32.102715] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:25.474 Initializing NVMe Controllers 00:16:25.474 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:25.474 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:25.474 Namespace ID: 1 size: 0GB 00:16:25.474 Initialization complete. 00:16:25.474 INFO: using host memory buffer for IO 00:16:25.474 Hello world! 
00:16:25.474 [2024-11-20 14:36:32.136899] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:25.474 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:25.474 [2024-11-20 14:36:32.364583] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:26.413 Initializing NVMe Controllers 00:16:26.413 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:26.413 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:26.413 Initialization complete. Launching workers. 00:16:26.413 submit (in ns) avg, min, max = 5120.6, 2832.5, 4000518.3 00:16:26.413 complete (in ns) avg, min, max = 17659.3, 1640.0, 4005730.0 00:16:26.413 00:16:26.413 Submit histogram 00:16:26.413 ================ 00:16:26.413 Range in us Cumulative Count 00:16:26.413 2.827 - 2.840: 0.0826% ( 17) 00:16:26.413 2.840 - 2.853: 0.5980% ( 106) 00:16:26.413 2.853 - 2.867: 2.6010% ( 412) 00:16:26.413 2.867 - 2.880: 6.3834% ( 778) 00:16:26.413 2.880 - 2.893: 10.5644% ( 860) 00:16:26.413 2.893 - 2.907: 16.1505% ( 1149) 00:16:26.413 2.907 - 2.920: 21.6151% ( 1124) 00:16:26.413 2.920 - 2.933: 27.0261% ( 1113) 00:16:26.413 2.933 - 2.947: 33.0984% ( 1249) 00:16:26.413 2.947 - 2.960: 38.8060% ( 1174) 00:16:26.413 2.960 - 2.973: 46.3513% ( 1552) 00:16:26.413 2.973 - 2.987: 54.0279% ( 1579) 00:16:26.413 2.987 - 3.000: 61.7094% ( 1580) 00:16:26.413 3.000 - 3.013: 70.5771% ( 1824) 00:16:26.413 3.013 - 3.027: 79.3378% ( 1802) 00:16:26.413 3.027 - 3.040: 86.4748% ( 1468) 00:16:26.413 3.040 - 3.053: 91.7060% ( 1076) 00:16:26.413 3.053 - 3.067: 94.4334% ( 561) 00:16:26.413 3.067 - 3.080: 96.1398% ( 351) 00:16:26.413 3.080 - 3.093: 97.5157% ( 283) 00:16:26.413 3.093 - 3.107: 
98.5026% ( 203) 00:16:26.413 3.107 - 3.120: 99.0422% ( 111) 00:16:26.413 3.120 - 3.133: 99.2610% ( 45) 00:16:26.413 3.133 - 3.147: 99.4506% ( 39) 00:16:26.413 3.147 - 3.160: 99.5187% ( 14) 00:16:26.413 3.160 - 3.173: 99.5527% ( 7) 00:16:26.413 3.173 - 3.187: 99.5770% ( 5) 00:16:26.413 3.187 - 3.200: 99.5868% ( 2) 00:16:26.413 3.200 - 3.213: 99.5916% ( 1) 00:16:26.413 3.267 - 3.280: 99.5965% ( 1) 00:16:26.413 3.600 - 3.627: 99.6013% ( 1) 00:16:26.413 3.840 - 3.867: 99.6062% ( 1) 00:16:26.413 4.027 - 4.053: 99.6111% ( 1) 00:16:26.413 4.053 - 4.080: 99.6159% ( 1) 00:16:26.413 4.293 - 4.320: 99.6208% ( 1) 00:16:26.413 4.400 - 4.427: 99.6257% ( 1) 00:16:26.413 4.453 - 4.480: 99.6305% ( 1) 00:16:26.413 4.533 - 4.560: 99.6354% ( 1) 00:16:26.413 4.640 - 4.667: 99.6402% ( 1) 00:16:26.413 4.667 - 4.693: 99.6451% ( 1) 00:16:26.413 4.693 - 4.720: 99.6694% ( 5) 00:16:26.413 4.720 - 4.747: 99.6743% ( 1) 00:16:26.413 4.747 - 4.773: 99.6840% ( 2) 00:16:26.413 4.773 - 4.800: 99.6889% ( 1) 00:16:26.413 4.800 - 4.827: 99.6937% ( 1) 00:16:26.413 4.827 - 4.853: 99.6986% ( 1) 00:16:26.413 4.853 - 4.880: 99.7034% ( 1) 00:16:26.413 4.880 - 4.907: 99.7132% ( 2) 00:16:26.413 4.933 - 4.960: 99.7229% ( 2) 00:16:26.413 4.960 - 4.987: 99.7326% ( 2) 00:16:26.413 4.987 - 5.013: 99.7423% ( 2) 00:16:26.413 5.040 - 5.067: 99.7569% ( 3) 00:16:26.413 5.067 - 5.093: 99.7666% ( 2) 00:16:26.413 5.120 - 5.147: 99.7861% ( 4) 00:16:26.413 5.173 - 5.200: 99.7909% ( 1) 00:16:26.413 5.253 - 5.280: 99.8007% ( 2) 00:16:26.413 5.307 - 5.333: 99.8055% ( 1) 00:16:26.413 5.387 - 5.413: 99.8104% ( 1) 00:16:26.413 5.440 - 5.467: 99.8153% ( 1) 00:16:26.413 5.547 - 5.573: 99.8201% ( 1) 00:16:26.413 5.707 - 5.733: 99.8250% ( 1) 00:16:26.413 5.947 - 5.973: 99.8298% ( 1) 00:16:26.413 6.293 - 6.320: 99.8396% ( 2) 00:16:26.413 6.347 - 6.373: 99.8444% ( 1) 00:16:26.413 6.533 - 6.560: 99.8493% ( 1) 00:16:26.413 6.560 - 6.587: 99.8541% ( 1) 00:16:26.413 6.667 - 6.693: 99.8590% ( 1) 00:16:26.413 6.693 - 6.720: 99.8639% ( 1) 
00:16:26.413 6.880 - 6.933: 99.8687% ( 1) 00:16:26.413 6.933 - 6.987: 99.8785% ( 2) 00:16:26.413 7.040 - 7.093: 99.8833% ( 1) 00:16:26.413 7.093 - 7.147: 99.8882% ( 1) 00:16:26.413 7.147 - 7.200: 99.8930% ( 1) 00:16:26.413 7.360 - 7.413: 99.8979% ( 1) 00:16:26.413 7.413 - 7.467: 99.9076% ( 2) 00:16:26.413 7.520 - 7.573: 99.9174% ( 2) 00:16:26.413 7.627 - 7.680: 99.9222% ( 1) 00:16:26.413 7.840 - 7.893: 99.9271% ( 1) 00:16:26.413 8.000 - 8.053: 99.9319% ( 1) 00:16:26.413 [2024-11-20 14:36:33.386092] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:26.413 9.333 - 9.387: 99.9368% ( 1) 00:16:26.413 9.653 - 9.707: 99.9465% ( 2) 00:16:26.413 3986.773 - 4014.080: 100.0000% ( 11) 00:16:26.413 00:16:26.413 Complete histogram 00:16:26.413 ================== 00:16:26.413 Range in us Cumulative Count 00:16:26.413 1.640 - 1.647: 0.0729% ( 15) 00:16:26.413 1.647 - 1.653: 0.8605% ( 162) 00:16:26.413 1.653 - 1.660: 0.9869% ( 26) 00:16:26.413 1.660 - 1.667: 1.0550% ( 14) 00:16:26.413 1.667 - 1.673: 1.1571% ( 21) 00:16:26.413 1.673 - 1.680: 1.1911% ( 7) 00:16:26.413 1.680 - 1.687: 1.2203% ( 6) 00:16:26.413 1.687 - 1.693: 1.2251% ( 1) 00:16:26.413 1.693 - 1.700: 1.3321% ( 22) 00:16:26.413 1.700 - 1.707: 19.8016% ( 3799) 00:16:26.413 1.707 - 1.720: 52.5402% ( 6734) 00:16:26.413 1.720 - 1.733: 70.4896% ( 3692) 00:16:26.413 1.733 - 1.747: 80.3588% ( 2030) 00:16:26.413 1.747 - 1.760: 83.0619% ( 556) 00:16:26.413 1.760 - 1.773: 86.2366% ( 653) 00:16:26.413 1.773 - 1.787: 91.7011% ( 1124) 00:16:26.413 1.787 - 1.800: 96.0961% ( 904) 00:16:26.413 1.800 - 1.813: 98.2887% ( 451) 00:16:26.413 1.813 - 1.827: 99.1930% ( 186) 00:16:26.413 1.827 - 1.840: 99.3923% ( 41) 00:16:26.413 1.840 - 1.853: 99.4263% ( 7) 00:16:26.413 1.853 - 1.867: 99.4312% ( 1) 00:16:26.413 3.320 - 3.333: 99.4360% ( 1) 00:16:26.413 3.467 - 3.493: 99.4409% ( 1) 00:16:26.413 3.520 - 3.547: 99.4458% ( 1) 00:16:26.413 3.600 - 3.627: 99.4506% ( 1) 00:16:26.413 3.627 - 
3.653: 99.4555% ( 1) 00:16:26.413 3.653 - 3.680: 99.4604% ( 1) 00:16:26.413 3.733 - 3.760: 99.4701% ( 2) 00:16:26.413 3.840 - 3.867: 99.4749% ( 1) 00:16:26.413 3.893 - 3.920: 99.4798% ( 1) 00:16:26.413 3.973 - 4.000: 99.4847% ( 1) 00:16:26.413 4.187 - 4.213: 99.4944% ( 2) 00:16:26.413 4.267 - 4.293: 99.4992% ( 1) 00:16:26.413 4.293 - 4.320: 99.5041% ( 1) 00:16:26.414 4.507 - 4.533: 99.5090% ( 1) 00:16:26.414 4.533 - 4.560: 99.5138% ( 1) 00:16:26.414 4.613 - 4.640: 99.5187% ( 1) 00:16:26.414 4.747 - 4.773: 99.5236% ( 1) 00:16:26.414 5.040 - 5.067: 99.5284% ( 1) 00:16:26.414 5.253 - 5.280: 99.5333% ( 1) 00:16:26.414 5.387 - 5.413: 99.5381% ( 1) 00:16:26.414 5.413 - 5.440: 99.5430% ( 1) 00:16:26.414 5.627 - 5.653: 99.5479% ( 1) 00:16:26.414 5.653 - 5.680: 99.5527% ( 1) 00:16:26.414 5.707 - 5.733: 99.5576% ( 1) 00:16:26.414 5.760 - 5.787: 99.5624% ( 1) 00:16:26.414 5.973 - 6.000: 99.5673% ( 1) 00:16:26.414 6.080 - 6.107: 99.5770% ( 2) 00:16:26.414 6.133 - 6.160: 99.5819% ( 1) 00:16:26.414 6.347 - 6.373: 99.5868% ( 1) 00:16:26.414 6.533 - 6.560: 99.5965% ( 2) 00:16:26.414 6.667 - 6.693: 99.6013% ( 1) 00:16:26.414 3986.773 - 4014.080: 100.0000% ( 82) 00:16:26.414 00:16:26.414 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:26.414 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:26.414 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:26.414 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:26.414 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:26.673 [ 00:16:26.673 { 
00:16:26.673 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:26.673 "subtype": "Discovery", 00:16:26.673 "listen_addresses": [], 00:16:26.673 "allow_any_host": true, 00:16:26.673 "hosts": [] 00:16:26.673 }, 00:16:26.673 { 00:16:26.673 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:26.673 "subtype": "NVMe", 00:16:26.673 "listen_addresses": [ 00:16:26.673 { 00:16:26.673 "trtype": "VFIOUSER", 00:16:26.673 "adrfam": "IPv4", 00:16:26.673 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:26.673 "trsvcid": "0" 00:16:26.673 } 00:16:26.673 ], 00:16:26.673 "allow_any_host": true, 00:16:26.673 "hosts": [], 00:16:26.673 "serial_number": "SPDK1", 00:16:26.673 "model_number": "SPDK bdev Controller", 00:16:26.673 "max_namespaces": 32, 00:16:26.673 "min_cntlid": 1, 00:16:26.673 "max_cntlid": 65519, 00:16:26.673 "namespaces": [ 00:16:26.673 { 00:16:26.673 "nsid": 1, 00:16:26.673 "bdev_name": "Malloc1", 00:16:26.673 "name": "Malloc1", 00:16:26.673 "nguid": "C304FCDAFA2F42129ACEA50CE4A32DEA", 00:16:26.673 "uuid": "c304fcda-fa2f-4212-9ace-a50ce4a32dea" 00:16:26.673 } 00:16:26.673 ] 00:16:26.673 }, 00:16:26.673 { 00:16:26.673 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:26.673 "subtype": "NVMe", 00:16:26.673 "listen_addresses": [ 00:16:26.673 { 00:16:26.673 "trtype": "VFIOUSER", 00:16:26.673 "adrfam": "IPv4", 00:16:26.673 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:26.673 "trsvcid": "0" 00:16:26.673 } 00:16:26.673 ], 00:16:26.673 "allow_any_host": true, 00:16:26.673 "hosts": [], 00:16:26.673 "serial_number": "SPDK2", 00:16:26.673 "model_number": "SPDK bdev Controller", 00:16:26.673 "max_namespaces": 32, 00:16:26.673 "min_cntlid": 1, 00:16:26.673 "max_cntlid": 65519, 00:16:26.673 "namespaces": [ 00:16:26.673 { 00:16:26.673 "nsid": 1, 00:16:26.673 "bdev_name": "Malloc2", 00:16:26.673 "name": "Malloc2", 00:16:26.673 "nguid": "C28358D01ADE4D9BBBC14786C90816A5", 00:16:26.673 "uuid": "c28358d0-1ade-4d9b-bbc1-4786c90816a5" 00:16:26.673 } 00:16:26.673 ] 00:16:26.673 } 
00:16:26.673 ] 00:16:26.673 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:26.673 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3859804 00:16:26.673 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:26.673 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:26.674 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:26.674 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:26.674 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:26.674 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:26.674 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:26.674 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:26.935 Malloc3 00:16:26.935 [2024-11-20 14:36:33.743626] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:26.935 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:26.935 [2024-11-20 14:36:33.905765] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:26.935 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:26.935 Asynchronous Event Request test 00:16:26.935 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:26.935 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:26.935 Registering asynchronous event callbacks... 00:16:26.935 Starting namespace attribute notice tests for all controllers... 00:16:26.935 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:26.935 aer_cb - Changed Namespace 00:16:26.935 Cleaning up... 00:16:27.195 [ 00:16:27.195 { 00:16:27.195 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:27.195 "subtype": "Discovery", 00:16:27.195 "listen_addresses": [], 00:16:27.195 "allow_any_host": true, 00:16:27.195 "hosts": [] 00:16:27.195 }, 00:16:27.195 { 00:16:27.195 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:27.195 "subtype": "NVMe", 00:16:27.195 "listen_addresses": [ 00:16:27.195 { 00:16:27.195 "trtype": "VFIOUSER", 00:16:27.195 "adrfam": "IPv4", 00:16:27.195 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:27.195 "trsvcid": "0" 00:16:27.195 } 00:16:27.195 ], 00:16:27.195 "allow_any_host": true, 00:16:27.195 "hosts": [], 00:16:27.195 "serial_number": "SPDK1", 00:16:27.195 "model_number": "SPDK bdev Controller", 00:16:27.195 "max_namespaces": 32, 00:16:27.195 "min_cntlid": 1, 00:16:27.195 "max_cntlid": 65519, 00:16:27.195 "namespaces": [ 00:16:27.195 { 00:16:27.195 "nsid": 1, 00:16:27.195 "bdev_name": "Malloc1", 00:16:27.195 "name": "Malloc1", 00:16:27.195 "nguid": "C304FCDAFA2F42129ACEA50CE4A32DEA", 00:16:27.195 "uuid": "c304fcda-fa2f-4212-9ace-a50ce4a32dea" 00:16:27.195 }, 00:16:27.195 { 00:16:27.195 "nsid": 2, 00:16:27.195 "bdev_name": "Malloc3", 00:16:27.195 "name": "Malloc3", 00:16:27.195 "nguid": 
"3C6EEE55CF974A2D9A2DC3A2F5203232", 00:16:27.195 "uuid": "3c6eee55-cf97-4a2d-9a2d-c3a2f5203232" 00:16:27.195 } 00:16:27.195 ] 00:16:27.195 }, 00:16:27.195 { 00:16:27.195 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:27.195 "subtype": "NVMe", 00:16:27.195 "listen_addresses": [ 00:16:27.195 { 00:16:27.195 "trtype": "VFIOUSER", 00:16:27.195 "adrfam": "IPv4", 00:16:27.195 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:27.195 "trsvcid": "0" 00:16:27.195 } 00:16:27.195 ], 00:16:27.195 "allow_any_host": true, 00:16:27.195 "hosts": [], 00:16:27.195 "serial_number": "SPDK2", 00:16:27.195 "model_number": "SPDK bdev Controller", 00:16:27.195 "max_namespaces": 32, 00:16:27.195 "min_cntlid": 1, 00:16:27.195 "max_cntlid": 65519, 00:16:27.195 "namespaces": [ 00:16:27.195 { 00:16:27.195 "nsid": 1, 00:16:27.195 "bdev_name": "Malloc2", 00:16:27.195 "name": "Malloc2", 00:16:27.195 "nguid": "C28358D01ADE4D9BBBC14786C90816A5", 00:16:27.195 "uuid": "c28358d0-1ade-4d9b-bbc1-4786c90816a5" 00:16:27.195 } 00:16:27.195 ] 00:16:27.195 } 00:16:27.195 ] 00:16:27.195 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3859804 00:16:27.196 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:27.196 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:27.196 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:27.196 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:27.196 [2024-11-20 14:36:34.089552] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 
initialization... 00:16:27.196 [2024-11-20 14:36:34.089582] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3859999 ] 00:16:27.196 [2024-11-20 14:36:34.127492] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:27.196 [2024-11-20 14:36:34.136435] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:27.196 [2024-11-20 14:36:34.136455] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe6c4662000 00:16:27.196 [2024-11-20 14:36:34.137435] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:27.196 [2024-11-20 14:36:34.138440] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:27.196 [2024-11-20 14:36:34.139446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:27.196 [2024-11-20 14:36:34.140450] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:27.196 [2024-11-20 14:36:34.141461] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:27.196 [2024-11-20 14:36:34.142465] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:27.196 [2024-11-20 14:36:34.143476] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap 
offset 0 00:16:27.196 [2024-11-20 14:36:34.144480] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:27.196 [2024-11-20 14:36:34.145489] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:27.196 [2024-11-20 14:36:34.145497] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe6c4657000 00:16:27.196 [2024-11-20 14:36:34.146410] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:27.196 [2024-11-20 14:36:34.155792] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:27.196 [2024-11-20 14:36:34.155811] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:27.196 [2024-11-20 14:36:34.160871] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:27.196 [2024-11-20 14:36:34.160903] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:27.196 [2024-11-20 14:36:34.160961] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:27.196 [2024-11-20 14:36:34.160971] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:27.196 [2024-11-20 14:36:34.160975] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:27.196 [2024-11-20 14:36:34.161875] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:27.196 [2024-11-20 14:36:34.161882] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:27.196 [2024-11-20 14:36:34.161887] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:27.196 [2024-11-20 14:36:34.162885] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:27.196 [2024-11-20 14:36:34.162893] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:27.196 [2024-11-20 14:36:34.162898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:27.196 [2024-11-20 14:36:34.163887] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:27.196 [2024-11-20 14:36:34.163894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:27.196 [2024-11-20 14:36:34.164894] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:27.196 [2024-11-20 14:36:34.164901] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:27.196 [2024-11-20 14:36:34.164905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:27.196 [2024-11-20 14:36:34.164910] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:27.196 [2024-11-20 14:36:34.165017] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:27.196 [2024-11-20 14:36:34.165021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:27.196 [2024-11-20 14:36:34.165024] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:27.196 [2024-11-20 14:36:34.165903] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:27.196 [2024-11-20 14:36:34.166903] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:27.196 [2024-11-20 14:36:34.167910] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:27.196 [2024-11-20 14:36:34.168911] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:27.196 [2024-11-20 14:36:34.168945] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:27.196 [2024-11-20 14:36:34.169922] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:27.196 [2024-11-20 14:36:34.169928] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:27.196 [2024-11-20 
14:36:34.169932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:27.196 [2024-11-20 14:36:34.169947] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:27.196 [2024-11-20 14:36:34.169952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:27.196 [2024-11-20 14:36:34.169961] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:27.196 [2024-11-20 14:36:34.169964] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:27.196 [2024-11-20 14:36:34.169967] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:27.196 [2024-11-20 14:36:34.169977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:27.196 [2024-11-20 14:36:34.177251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:27.196 [2024-11-20 14:36:34.177260] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:27.196 [2024-11-20 14:36:34.177264] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:27.196 [2024-11-20 14:36:34.177267] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:27.196 [2024-11-20 14:36:34.177270] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 
00:16:27.196 [2024-11-20 14:36:34.177275] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:27.196 [2024-11-20 14:36:34.177278] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:27.196 [2024-11-20 14:36:34.177282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:27.196 [2024-11-20 14:36:34.177288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:27.196 [2024-11-20 14:36:34.177297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:27.196 [2024-11-20 14:36:34.185249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:27.196 [2024-11-20 14:36:34.185259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.196 [2024-11-20 14:36:34.185265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.196 [2024-11-20 14:36:34.185271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.196 [2024-11-20 14:36:34.185277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.196 [2024-11-20 14:36:34.185281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:27.196 [2024-11-20 14:36:34.185286] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:27.196 [2024-11-20 14:36:34.185292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:27.196 [2024-11-20 14:36:34.193249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:27.196 [2024-11-20 14:36:34.193257] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:27.196 [2024-11-20 14:36:34.193260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:27.196 [2024-11-20 14:36:34.193265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:27.197 [2024-11-20 14:36:34.193269] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:27.197 [2024-11-20 14:36:34.193275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:27.197 [2024-11-20 14:36:34.201250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:27.197 [2024-11-20 14:36:34.201296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:27.197 [2024-11-20 14:36:34.201302] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify 
active ns (timeout 30000 ms) 00:16:27.197 [2024-11-20 14:36:34.201307] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:27.197 [2024-11-20 14:36:34.201310] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:27.197 [2024-11-20 14:36:34.201313] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:27.197 [2024-11-20 14:36:34.201317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:27.197 [2024-11-20 14:36:34.209249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:27.197 [2024-11-20 14:36:34.209257] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:27.197 [2024-11-20 14:36:34.209266] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:27.197 [2024-11-20 14:36:34.209275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:27.197 [2024-11-20 14:36:34.209280] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:27.197 [2024-11-20 14:36:34.209283] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:27.197 [2024-11-20 14:36:34.209285] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:27.197 [2024-11-20 14:36:34.209290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:27.197 [2024-11-20 14:36:34.217250] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:27.197 [2024-11-20 14:36:34.217262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:27.197 [2024-11-20 14:36:34.217268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:27.197 [2024-11-20 14:36:34.217273] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:27.197 [2024-11-20 14:36:34.217276] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:27.197 [2024-11-20 14:36:34.217278] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:27.197 [2024-11-20 14:36:34.217282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:27.197 [2024-11-20 14:36:34.225252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:27.197 [2024-11-20 14:36:34.225259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:27.197 [2024-11-20 14:36:34.225264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:27.197 [2024-11-20 14:36:34.225270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:27.197 [2024-11-20 14:36:34.225274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:27.197 [2024-11-20 14:36:34.225277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:27.197 [2024-11-20 14:36:34.225281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:27.197 [2024-11-20 14:36:34.225284] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:27.197 [2024-11-20 14:36:34.225288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:27.197 [2024-11-20 14:36:34.225291] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:27.197 [2024-11-20 14:36:34.225303] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:27.197 [2024-11-20 14:36:34.233251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:27.197 [2024-11-20 14:36:34.233261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:27.197 [2024-11-20 14:36:34.241251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:27.197 [2024-11-20 14:36:34.241262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:27.197 [2024-11-20 14:36:34.249251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:27.197 [2024-11-20 14:36:34.249262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:27.458 [2024-11-20 14:36:34.257249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:27.458 [2024-11-20 14:36:34.257262] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:27.458 [2024-11-20 14:36:34.257266] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:27.458 [2024-11-20 14:36:34.257268] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:27.458 [2024-11-20 14:36:34.257271] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:27.458 [2024-11-20 14:36:34.257273] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:27.458 [2024-11-20 14:36:34.257278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:27.458 [2024-11-20 14:36:34.257283] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:27.458 [2024-11-20 14:36:34.257286] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:27.458 [2024-11-20 14:36:34.257288] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:27.458 [2024-11-20 14:36:34.257293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:27.458 [2024-11-20 14:36:34.257298] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:512 00:16:27.458 [2024-11-20 14:36:34.257301] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:27.458 [2024-11-20 14:36:34.257303] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:27.458 [2024-11-20 14:36:34.257307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:27.458 [2024-11-20 14:36:34.257312] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:27.458 [2024-11-20 14:36:34.257315] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:27.458 [2024-11-20 14:36:34.257318] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:27.458 [2024-11-20 14:36:34.257322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:27.458 [2024-11-20 14:36:34.265251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:27.458 [2024-11-20 14:36:34.265262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:27.458 [2024-11-20 14:36:34.265269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:27.458 [2024-11-20 14:36:34.265274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:27.458 ===================================================== 00:16:27.458 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:27.458 
===================================================== 00:16:27.458 Controller Capabilities/Features 00:16:27.458 ================================ 00:16:27.458 Vendor ID: 4e58 00:16:27.458 Subsystem Vendor ID: 4e58 00:16:27.458 Serial Number: SPDK2 00:16:27.458 Model Number: SPDK bdev Controller 00:16:27.458 Firmware Version: 25.01 00:16:27.458 Recommended Arb Burst: 6 00:16:27.458 IEEE OUI Identifier: 8d 6b 50 00:16:27.458 Multi-path I/O 00:16:27.458 May have multiple subsystem ports: Yes 00:16:27.458 May have multiple controllers: Yes 00:16:27.458 Associated with SR-IOV VF: No 00:16:27.458 Max Data Transfer Size: 131072 00:16:27.458 Max Number of Namespaces: 32 00:16:27.458 Max Number of I/O Queues: 127 00:16:27.458 NVMe Specification Version (VS): 1.3 00:16:27.458 NVMe Specification Version (Identify): 1.3 00:16:27.458 Maximum Queue Entries: 256 00:16:27.458 Contiguous Queues Required: Yes 00:16:27.458 Arbitration Mechanisms Supported 00:16:27.458 Weighted Round Robin: Not Supported 00:16:27.458 Vendor Specific: Not Supported 00:16:27.458 Reset Timeout: 15000 ms 00:16:27.458 Doorbell Stride: 4 bytes 00:16:27.458 NVM Subsystem Reset: Not Supported 00:16:27.458 Command Sets Supported 00:16:27.458 NVM Command Set: Supported 00:16:27.458 Boot Partition: Not Supported 00:16:27.458 Memory Page Size Minimum: 4096 bytes 00:16:27.458 Memory Page Size Maximum: 4096 bytes 00:16:27.458 Persistent Memory Region: Not Supported 00:16:27.458 Optional Asynchronous Events Supported 00:16:27.458 Namespace Attribute Notices: Supported 00:16:27.458 Firmware Activation Notices: Not Supported 00:16:27.458 ANA Change Notices: Not Supported 00:16:27.458 PLE Aggregate Log Change Notices: Not Supported 00:16:27.458 LBA Status Info Alert Notices: Not Supported 00:16:27.458 EGE Aggregate Log Change Notices: Not Supported 00:16:27.458 Normal NVM Subsystem Shutdown event: Not Supported 00:16:27.458 Zone Descriptor Change Notices: Not Supported 00:16:27.458 Discovery Log Change Notices: Not 
Supported 00:16:27.458 Controller Attributes 00:16:27.458 128-bit Host Identifier: Supported 00:16:27.458 Non-Operational Permissive Mode: Not Supported 00:16:27.458 NVM Sets: Not Supported 00:16:27.458 Read Recovery Levels: Not Supported 00:16:27.458 Endurance Groups: Not Supported 00:16:27.458 Predictable Latency Mode: Not Supported 00:16:27.458 Traffic Based Keep ALive: Not Supported 00:16:27.458 Namespace Granularity: Not Supported 00:16:27.458 SQ Associations: Not Supported 00:16:27.458 UUID List: Not Supported 00:16:27.458 Multi-Domain Subsystem: Not Supported 00:16:27.458 Fixed Capacity Management: Not Supported 00:16:27.458 Variable Capacity Management: Not Supported 00:16:27.458 Delete Endurance Group: Not Supported 00:16:27.458 Delete NVM Set: Not Supported 00:16:27.458 Extended LBA Formats Supported: Not Supported 00:16:27.458 Flexible Data Placement Supported: Not Supported 00:16:27.458 00:16:27.458 Controller Memory Buffer Support 00:16:27.458 ================================ 00:16:27.458 Supported: No 00:16:27.458 00:16:27.458 Persistent Memory Region Support 00:16:27.458 ================================ 00:16:27.458 Supported: No 00:16:27.458 00:16:27.458 Admin Command Set Attributes 00:16:27.458 ============================ 00:16:27.458 Security Send/Receive: Not Supported 00:16:27.458 Format NVM: Not Supported 00:16:27.458 Firmware Activate/Download: Not Supported 00:16:27.458 Namespace Management: Not Supported 00:16:27.458 Device Self-Test: Not Supported 00:16:27.458 Directives: Not Supported 00:16:27.458 NVMe-MI: Not Supported 00:16:27.458 Virtualization Management: Not Supported 00:16:27.458 Doorbell Buffer Config: Not Supported 00:16:27.458 Get LBA Status Capability: Not Supported 00:16:27.458 Command & Feature Lockdown Capability: Not Supported 00:16:27.458 Abort Command Limit: 4 00:16:27.458 Async Event Request Limit: 4 00:16:27.458 Number of Firmware Slots: N/A 00:16:27.458 Firmware Slot 1 Read-Only: N/A 00:16:27.458 Firmware Activation 
Without Reset: N/A 00:16:27.459 Multiple Update Detection Support: N/A 00:16:27.459 Firmware Update Granularity: No Information Provided 00:16:27.459 Per-Namespace SMART Log: No 00:16:27.459 Asymmetric Namespace Access Log Page: Not Supported 00:16:27.459 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:27.459 Command Effects Log Page: Supported 00:16:27.459 Get Log Page Extended Data: Supported 00:16:27.459 Telemetry Log Pages: Not Supported 00:16:27.459 Persistent Event Log Pages: Not Supported 00:16:27.459 Supported Log Pages Log Page: May Support 00:16:27.459 Commands Supported & Effects Log Page: Not Supported 00:16:27.459 Feature Identifiers & Effects Log Page:May Support 00:16:27.459 NVMe-MI Commands & Effects Log Page: May Support 00:16:27.459 Data Area 4 for Telemetry Log: Not Supported 00:16:27.459 Error Log Page Entries Supported: 128 00:16:27.459 Keep Alive: Supported 00:16:27.459 Keep Alive Granularity: 10000 ms 00:16:27.459 00:16:27.459 NVM Command Set Attributes 00:16:27.459 ========================== 00:16:27.459 Submission Queue Entry Size 00:16:27.459 Max: 64 00:16:27.459 Min: 64 00:16:27.459 Completion Queue Entry Size 00:16:27.459 Max: 16 00:16:27.459 Min: 16 00:16:27.459 Number of Namespaces: 32 00:16:27.459 Compare Command: Supported 00:16:27.459 Write Uncorrectable Command: Not Supported 00:16:27.459 Dataset Management Command: Supported 00:16:27.459 Write Zeroes Command: Supported 00:16:27.459 Set Features Save Field: Not Supported 00:16:27.459 Reservations: Not Supported 00:16:27.459 Timestamp: Not Supported 00:16:27.459 Copy: Supported 00:16:27.459 Volatile Write Cache: Present 00:16:27.459 Atomic Write Unit (Normal): 1 00:16:27.459 Atomic Write Unit (PFail): 1 00:16:27.459 Atomic Compare & Write Unit: 1 00:16:27.459 Fused Compare & Write: Supported 00:16:27.459 Scatter-Gather List 00:16:27.459 SGL Command Set: Supported (Dword aligned) 00:16:27.459 SGL Keyed: Not Supported 00:16:27.459 SGL Bit Bucket Descriptor: Not Supported 00:16:27.459 
SGL Metadata Pointer: Not Supported 00:16:27.459 Oversized SGL: Not Supported 00:16:27.459 SGL Metadata Address: Not Supported 00:16:27.459 SGL Offset: Not Supported 00:16:27.459 Transport SGL Data Block: Not Supported 00:16:27.459 Replay Protected Memory Block: Not Supported 00:16:27.459 00:16:27.459 Firmware Slot Information 00:16:27.459 ========================= 00:16:27.459 Active slot: 1 00:16:27.459 Slot 1 Firmware Revision: 25.01 00:16:27.459 00:16:27.459 00:16:27.459 Commands Supported and Effects 00:16:27.459 ============================== 00:16:27.459 Admin Commands 00:16:27.459 -------------- 00:16:27.459 Get Log Page (02h): Supported 00:16:27.459 Identify (06h): Supported 00:16:27.459 Abort (08h): Supported 00:16:27.459 Set Features (09h): Supported 00:16:27.459 Get Features (0Ah): Supported 00:16:27.459 Asynchronous Event Request (0Ch): Supported 00:16:27.459 Keep Alive (18h): Supported 00:16:27.459 I/O Commands 00:16:27.459 ------------ 00:16:27.459 Flush (00h): Supported LBA-Change 00:16:27.459 Write (01h): Supported LBA-Change 00:16:27.459 Read (02h): Supported 00:16:27.459 Compare (05h): Supported 00:16:27.459 Write Zeroes (08h): Supported LBA-Change 00:16:27.459 Dataset Management (09h): Supported LBA-Change 00:16:27.459 Copy (19h): Supported LBA-Change 00:16:27.459 00:16:27.459 Error Log 00:16:27.459 ========= 00:16:27.459 00:16:27.459 Arbitration 00:16:27.459 =========== 00:16:27.459 Arbitration Burst: 1 00:16:27.459 00:16:27.459 Power Management 00:16:27.459 ================ 00:16:27.459 Number of Power States: 1 00:16:27.459 Current Power State: Power State #0 00:16:27.459 Power State #0: 00:16:27.459 Max Power: 0.00 W 00:16:27.459 Non-Operational State: Operational 00:16:27.459 Entry Latency: Not Reported 00:16:27.459 Exit Latency: Not Reported 00:16:27.459 Relative Read Throughput: 0 00:16:27.459 Relative Read Latency: 0 00:16:27.459 Relative Write Throughput: 0 00:16:27.459 Relative Write Latency: 0 00:16:27.459 Idle Power: Not Reported 
00:16:27.459 Active Power: Not Reported 00:16:27.459 Non-Operational Permissive Mode: Not Supported 00:16:27.459 00:16:27.459 Health Information 00:16:27.459 ================== 00:16:27.459 Critical Warnings: 00:16:27.459 Available Spare Space: OK 00:16:27.459 Temperature: OK 00:16:27.459 Device Reliability: OK 00:16:27.459 Read Only: No 00:16:27.459 Volatile Memory Backup: OK 00:16:27.459 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:27.459 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:27.459 Available Spare: 0% 00:16:27.459 Available Sp[2024-11-20 14:36:34.265348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:27.459 [2024-11-20 14:36:34.273250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:27.459 [2024-11-20 14:36:34.273274] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:27.459 [2024-11-20 14:36:34.273281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.459 [2024-11-20 14:36:34.273285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.459 [2024-11-20 14:36:34.273290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.459 [2024-11-20 14:36:34.273294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.459 [2024-11-20 14:36:34.273327] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:27.459 [2024-11-20 14:36:34.273335] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:27.459 [2024-11-20 14:36:34.274331] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:27.459 [2024-11-20 14:36:34.274367] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:27.459 [2024-11-20 14:36:34.274372] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:27.459 [2024-11-20 14:36:34.275339] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:27.459 [2024-11-20 14:36:34.275348] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:27.459 [2024-11-20 14:36:34.275389] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:27.459 [2024-11-20 14:36:34.276358] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:27.459 are Threshold: 0% 00:16:27.459 Life Percentage Used: 0% 00:16:27.459 Data Units Read: 0 00:16:27.459 Data Units Written: 0 00:16:27.459 Host Read Commands: 0 00:16:27.459 Host Write Commands: 0 00:16:27.459 Controller Busy Time: 0 minutes 00:16:27.459 Power Cycles: 0 00:16:27.459 Power On Hours: 0 hours 00:16:27.459 Unsafe Shutdowns: 0 00:16:27.459 Unrecoverable Media Errors: 0 00:16:27.459 Lifetime Error Log Entries: 0 00:16:27.459 Warning Temperature Time: 0 minutes 00:16:27.459 Critical Temperature Time: 0 minutes 00:16:27.459 00:16:27.459 Number of Queues 00:16:27.459 ================ 00:16:27.459 Number of I/O Submission Queues: 127 00:16:27.459 Number of I/O Completion Queues: 127 00:16:27.459 00:16:27.459 Active Namespaces 
00:16:27.459 ================= 00:16:27.459 Namespace ID:1 00:16:27.459 Error Recovery Timeout: Unlimited 00:16:27.459 Command Set Identifier: NVM (00h) 00:16:27.459 Deallocate: Supported 00:16:27.459 Deallocated/Unwritten Error: Not Supported 00:16:27.459 Deallocated Read Value: Unknown 00:16:27.459 Deallocate in Write Zeroes: Not Supported 00:16:27.459 Deallocated Guard Field: 0xFFFF 00:16:27.459 Flush: Supported 00:16:27.459 Reservation: Supported 00:16:27.459 Namespace Sharing Capabilities: Multiple Controllers 00:16:27.459 Size (in LBAs): 131072 (0GiB) 00:16:27.459 Capacity (in LBAs): 131072 (0GiB) 00:16:27.459 Utilization (in LBAs): 131072 (0GiB) 00:16:27.459 NGUID: C28358D01ADE4D9BBBC14786C90816A5 00:16:27.459 UUID: c28358d0-1ade-4d9b-bbc1-4786c90816a5 00:16:27.459 Thin Provisioning: Not Supported 00:16:27.459 Per-NS Atomic Units: Yes 00:16:27.459 Atomic Boundary Size (Normal): 0 00:16:27.459 Atomic Boundary Size (PFail): 0 00:16:27.459 Atomic Boundary Offset: 0 00:16:27.459 Maximum Single Source Range Length: 65535 00:16:27.459 Maximum Copy Length: 65535 00:16:27.459 Maximum Source Range Count: 1 00:16:27.459 NGUID/EUI64 Never Reused: No 00:16:27.459 Namespace Write Protected: No 00:16:27.459 Number of LBA Formats: 1 00:16:27.459 Current LBA Format: LBA Format #00 00:16:27.459 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:27.459 00:16:27.459 14:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:27.460 [2024-11-20 14:36:34.445236] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:32.738 Initializing NVMe Controllers 00:16:32.738 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 
00:16:32.738 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:32.738 Initialization complete. Launching workers. 00:16:32.738 ======================================================== 00:16:32.738 Latency(us) 00:16:32.738 Device Information : IOPS MiB/s Average min max 00:16:32.738 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40051.60 156.45 3198.25 850.36 6918.76 00:16:32.738 ======================================================== 00:16:32.738 Total : 40051.60 156.45 3198.25 850.36 6918.76 00:16:32.738 00:16:32.738 [2024-11-20 14:36:39.555443] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:32.738 14:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:32.738 [2024-11-20 14:36:39.730022] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:38.009 Initializing NVMe Controllers 00:16:38.009 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:38.009 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:38.009 Initialization complete. Launching workers. 
00:16:38.009 ======================================================== 00:16:38.009 Latency(us) 00:16:38.009 Device Information : IOPS MiB/s Average min max 00:16:38.009 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40070.79 156.53 3194.22 862.68 9322.86 00:16:38.009 ======================================================== 00:16:38.009 Total : 40070.79 156.53 3194.22 862.68 9322.86 00:16:38.009 00:16:38.009 [2024-11-20 14:36:44.751686] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:38.009 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:38.009 [2024-11-20 14:36:44.950888] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:43.281 [2024-11-20 14:36:50.086337] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:43.281 Initializing NVMe Controllers 00:16:43.281 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:43.281 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:43.281 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:43.281 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:43.281 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:43.281 Initialization complete. Launching workers. 
00:16:43.281 Starting thread on core 2 00:16:43.281 Starting thread on core 3 00:16:43.281 Starting thread on core 1 00:16:43.281 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:43.281 [2024-11-20 14:36:50.334595] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:47.470 [2024-11-20 14:36:54.041366] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:47.470 Initializing NVMe Controllers 00:16:47.470 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:47.470 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:47.470 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:47.470 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:47.470 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:47.470 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:47.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:47.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:47.470 Initialization complete. Launching workers. 
00:16:47.470 Starting thread on core 1 with urgent priority queue 00:16:47.470 Starting thread on core 2 with urgent priority queue 00:16:47.470 Starting thread on core 3 with urgent priority queue 00:16:47.470 Starting thread on core 0 with urgent priority queue 00:16:47.470 SPDK bdev Controller (SPDK2 ) core 0: 7182.00 IO/s 13.92 secs/100000 ios 00:16:47.470 SPDK bdev Controller (SPDK2 ) core 1: 6138.33 IO/s 16.29 secs/100000 ios 00:16:47.470 SPDK bdev Controller (SPDK2 ) core 2: 14039.33 IO/s 7.12 secs/100000 ios 00:16:47.470 SPDK bdev Controller (SPDK2 ) core 3: 11812.33 IO/s 8.47 secs/100000 ios 00:16:47.470 ======================================================== 00:16:47.470 00:16:47.470 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:47.470 [2024-11-20 14:36:54.261638] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:47.470 Initializing NVMe Controllers 00:16:47.470 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:47.470 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:47.470 Namespace ID: 1 size: 0GB 00:16:47.470 Initialization complete. 00:16:47.470 INFO: using host memory buffer for IO 00:16:47.470 Hello world! 
00:16:47.470 [2024-11-20 14:36:54.271685] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:47.470 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:47.470 [2024-11-20 14:36:54.498628] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:48.846 Initializing NVMe Controllers 00:16:48.846 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:48.846 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:48.846 Initialization complete. Launching workers. 00:16:48.846 submit (in ns) avg, min, max = 5881.2, 2815.0, 3998235.0 00:16:48.846 complete (in ns) avg, min, max = 15836.0, 1634.2, 4000701.7 00:16:48.846 00:16:48.846 Submit histogram 00:16:48.846 ================ 00:16:48.846 Range in us Cumulative Count 00:16:48.846 2.813 - 2.827: 0.3738% ( 77) 00:16:48.846 2.827 - 2.840: 1.0050% ( 130) 00:16:48.846 2.840 - 2.853: 2.5586% ( 320) 00:16:48.846 2.853 - 2.867: 6.2485% ( 760) 00:16:48.846 2.867 - 2.880: 10.0354% ( 780) 00:16:48.846 2.880 - 2.893: 14.7206% ( 965) 00:16:48.846 2.893 - 2.907: 20.1243% ( 1113) 00:16:48.846 2.907 - 2.920: 25.8290% ( 1175) 00:16:48.846 2.920 - 2.933: 32.0775% ( 1287) 00:16:48.846 2.933 - 2.947: 38.1949% ( 1260) 00:16:48.846 2.947 - 2.960: 44.5356% ( 1306) 00:16:48.846 2.960 - 2.973: 51.5900% ( 1453) 00:16:48.846 2.973 - 2.987: 58.7998% ( 1485) 00:16:48.846 2.987 - 3.000: 67.1943% ( 1729) 00:16:48.846 3.000 - 3.013: 75.8314% ( 1779) 00:16:48.846 3.013 - 3.027: 83.4782% ( 1575) 00:16:48.846 3.027 - 3.040: 89.8092% ( 1304) 00:16:48.846 3.040 - 3.053: 94.0283% ( 869) 00:16:48.846 3.053 - 3.067: 96.6112% ( 532) 00:16:48.846 3.067 - 3.080: 98.0191% ( 290) 00:16:48.846 3.080 - 3.093: 
98.8542% ( 172) 00:16:48.846 3.093 - 3.107: 99.2766% ( 87) 00:16:48.846 3.107 - 3.120: 99.4368% ( 33) 00:16:48.846 3.120 - 3.133: 99.5193% ( 17) 00:16:48.846 3.133 - 3.147: 99.5485% ( 6) 00:16:48.846 3.147 - 3.160: 99.5679% ( 4) 00:16:48.846 3.173 - 3.187: 99.5728% ( 1) 00:16:48.846 3.253 - 3.267: 99.5776% ( 1) 00:16:48.846 3.373 - 3.387: 99.5825% ( 1) 00:16:48.846 3.413 - 3.440: 99.5873% ( 1) 00:16:48.846 3.573 - 3.600: 99.5922% ( 1) 00:16:48.846 3.627 - 3.653: 99.5970% ( 1) 00:16:48.846 3.680 - 3.707: 99.6019% ( 1) 00:16:48.846 3.707 - 3.733: 99.6067% ( 1) 00:16:48.846 3.760 - 3.787: 99.6116% ( 1) 00:16:48.846 3.787 - 3.813: 99.6164% ( 1) 00:16:48.846 3.973 - 4.000: 99.6262% ( 2) 00:16:48.846 4.320 - 4.347: 99.6310% ( 1) 00:16:48.846 4.533 - 4.560: 99.6456% ( 3) 00:16:48.846 4.560 - 4.587: 99.6504% ( 1) 00:16:48.846 4.587 - 4.613: 99.6601% ( 2) 00:16:48.846 4.613 - 4.640: 99.6747% ( 3) 00:16:48.846 4.667 - 4.693: 99.6796% ( 1) 00:16:48.846 4.693 - 4.720: 99.6990% ( 4) 00:16:48.846 4.720 - 4.747: 99.7087% ( 2) 00:16:48.846 4.747 - 4.773: 99.7136% ( 1) 00:16:48.846 4.773 - 4.800: 99.7233% ( 2) 00:16:48.846 4.800 - 4.827: 99.7330% ( 2) 00:16:48.846 4.853 - 4.880: 99.7378% ( 1) 00:16:48.846 4.907 - 4.933: 99.7427% ( 1) 00:16:48.846 4.960 - 4.987: 99.7524% ( 2) 00:16:48.846 4.987 - 5.013: 99.7621% ( 2) 00:16:48.846 5.013 - 5.040: 99.7718% ( 2) 00:16:48.846 5.040 - 5.067: 99.7864% ( 3) 00:16:48.846 5.067 - 5.093: 99.7961% ( 2) 00:16:48.846 5.120 - 5.147: 99.8058% ( 2) 00:16:48.846 5.147 - 5.173: 99.8107% ( 1) 00:16:48.846 5.440 - 5.467: 99.8155% ( 1) 00:16:48.846 5.493 - 5.520: 99.8204% ( 1) 00:16:48.846 5.627 - 5.653: 99.8252% ( 1) 00:16:48.846 5.707 - 5.733: 99.8301% ( 1) 00:16:48.846 5.787 - 5.813: 99.8349% ( 1) 00:16:48.846 5.813 - 5.840: 99.8398% ( 1) 00:16:48.846 5.867 - 5.893: 99.8446% ( 1) 00:16:48.846 5.920 - 5.947: 99.8495% ( 1) 00:16:48.846 6.000 - 6.027: 99.8543% ( 1) 00:16:48.846 6.080 - 6.107: 99.8592% ( 1) 00:16:48.846 6.160 - 6.187: 99.8689% ( 2) 
00:16:48.846 6.213 - 6.240: 99.8738% ( 1) 00:16:48.846 6.240 - 6.267: 99.8786% ( 1) 00:16:48.846 6.267 - 6.293: 99.8883% ( 2) 00:16:48.846 6.373 - 6.400: 99.8932% ( 1) 00:16:48.847 6.480 - 6.507: 99.8980% ( 1) 00:16:48.847 6.987 - 7.040: 99.9029% ( 1) 00:16:48.847 7.040 - 7.093: 99.9078% ( 1) 00:16:48.847 7.093 - 7.147: 99.9126% ( 1) 00:16:48.847 7.840 - 7.893: 99.9175% ( 1) 00:16:48.847 10.240 - 10.293: 99.9223% ( 1) 00:16:48.847 10.347 - 10.400: 99.9272% ( 1) 00:16:48.847 [2024-11-20 14:36:55.592766] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:48.847 3986.773 - 4014.080: 100.0000% ( 15) 00:16:48.847 00:16:48.847 Complete histogram 00:16:48.847 ================== 00:16:48.847 Range in us Cumulative Count 00:16:48.847 1.633 - 1.640: 0.6603% ( 136) 00:16:48.847 1.640 - 1.647: 1.1070% ( 92) 00:16:48.847 1.647 - 1.653: 1.1798% ( 15) 00:16:48.847 1.653 - 1.660: 1.4274% ( 51) 00:16:48.847 1.660 - 1.667: 1.5536% ( 26) 00:16:48.847 1.667 - 1.673: 20.8962% ( 3984) 00:16:48.847 1.673 - 1.680: 46.9486% ( 5366) 00:16:48.847 1.680 - 1.687: 53.1242% ( 1272) 00:16:48.847 1.687 - 1.693: 65.7863% ( 2608) 00:16:48.847 1.693 - 1.700: 73.0689% ( 1500) 00:16:48.847 1.700 - 1.707: 78.7105% ( 1162) 00:16:48.847 1.707 - 1.720: 83.0801% ( 900) 00:16:48.847 1.720 - 1.733: 84.5657% ( 306) 00:16:48.847 1.733 - 1.747: 88.9498% ( 903) 00:16:48.847 1.747 - 1.760: 94.4361% ( 1130) 00:16:48.847 1.760 - 1.773: 97.4754% ( 626) 00:16:48.847 1.773 - 1.787: 98.9367% ( 301) 00:16:48.847 1.787 - 1.800: 99.4271% ( 101) 00:16:48.847 1.800 - 1.813: 99.4757% ( 10) 00:16:48.847 1.813 - 1.827: 99.4805% ( 1) 00:16:48.847 3.267 - 3.280: 99.4854% ( 1) 00:16:48.847 3.307 - 3.320: 99.4902% ( 1) 00:16:48.847 3.347 - 3.360: 99.4951% ( 1) 00:16:48.847 3.653 - 3.680: 99.4999% ( 1) 00:16:48.847 3.760 - 3.787: 99.5048% ( 1) 00:16:48.847 3.840 - 3.867: 99.5096% ( 1) 00:16:48.847 3.893 - 3.920: 99.5145% ( 1) 00:16:48.847 3.947 - 3.973: 99.5193% ( 1) 
00:16:48.847 4.053 - 4.080: 99.5242% ( 1) 00:16:48.847 4.187 - 4.213: 99.5291% ( 1) 00:16:48.847 4.240 - 4.267: 99.5388% ( 2) 00:16:48.847 4.293 - 4.320: 99.5436% ( 1) 00:16:48.847 4.320 - 4.347: 99.5485% ( 1) 00:16:48.847 4.347 - 4.373: 99.5533% ( 1) 00:16:48.847 4.453 - 4.480: 99.5582% ( 1) 00:16:48.847 4.480 - 4.507: 99.5630% ( 1) 00:16:48.847 4.533 - 4.560: 99.5728% ( 2) 00:16:48.847 4.613 - 4.640: 99.5776% ( 1) 00:16:48.847 4.667 - 4.693: 99.5825% ( 1) 00:16:48.847 4.693 - 4.720: 99.5873% ( 1) 00:16:48.847 4.773 - 4.800: 99.5970% ( 2) 00:16:48.847 4.933 - 4.960: 99.6019% ( 1) 00:16:48.847 5.013 - 5.040: 99.6067% ( 1) 00:16:48.847 5.280 - 5.307: 99.6116% ( 1) 00:16:48.847 5.333 - 5.360: 99.6164% ( 1) 00:16:48.847 5.520 - 5.547: 99.6213% ( 1) 00:16:48.847 5.707 - 5.733: 99.6262% ( 1) 00:16:48.847 5.867 - 5.893: 99.6310% ( 1) 00:16:48.847 6.347 - 6.373: 99.6359% ( 1) 00:16:48.847 7.040 - 7.093: 99.6407% ( 1) 00:16:48.847 9.600 - 9.653: 99.6456% ( 1) 00:16:48.847 3604.480 - 3631.787: 99.6504% ( 1) 00:16:48.847 3986.773 - 4014.080: 100.0000% ( 72) 00:16:48.847 00:16:48.847 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:48.847 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:48.847 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:48.847 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:48.847 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:48.847 [ 00:16:48.847 { 00:16:48.847 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:48.847 "subtype": "Discovery", 
00:16:48.847 "listen_addresses": [], 00:16:48.847 "allow_any_host": true, 00:16:48.847 "hosts": [] 00:16:48.847 }, 00:16:48.847 { 00:16:48.847 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:48.847 "subtype": "NVMe", 00:16:48.847 "listen_addresses": [ 00:16:48.847 { 00:16:48.847 "trtype": "VFIOUSER", 00:16:48.847 "adrfam": "IPv4", 00:16:48.847 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:48.847 "trsvcid": "0" 00:16:48.847 } 00:16:48.847 ], 00:16:48.847 "allow_any_host": true, 00:16:48.847 "hosts": [], 00:16:48.847 "serial_number": "SPDK1", 00:16:48.847 "model_number": "SPDK bdev Controller", 00:16:48.847 "max_namespaces": 32, 00:16:48.847 "min_cntlid": 1, 00:16:48.847 "max_cntlid": 65519, 00:16:48.847 "namespaces": [ 00:16:48.847 { 00:16:48.847 "nsid": 1, 00:16:48.847 "bdev_name": "Malloc1", 00:16:48.847 "name": "Malloc1", 00:16:48.847 "nguid": "C304FCDAFA2F42129ACEA50CE4A32DEA", 00:16:48.847 "uuid": "c304fcda-fa2f-4212-9ace-a50ce4a32dea" 00:16:48.847 }, 00:16:48.847 { 00:16:48.847 "nsid": 2, 00:16:48.847 "bdev_name": "Malloc3", 00:16:48.847 "name": "Malloc3", 00:16:48.847 "nguid": "3C6EEE55CF974A2D9A2DC3A2F5203232", 00:16:48.847 "uuid": "3c6eee55-cf97-4a2d-9a2d-c3a2f5203232" 00:16:48.847 } 00:16:48.847 ] 00:16:48.847 }, 00:16:48.847 { 00:16:48.847 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:48.847 "subtype": "NVMe", 00:16:48.847 "listen_addresses": [ 00:16:48.847 { 00:16:48.847 "trtype": "VFIOUSER", 00:16:48.847 "adrfam": "IPv4", 00:16:48.847 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:48.847 "trsvcid": "0" 00:16:48.847 } 00:16:48.847 ], 00:16:48.847 "allow_any_host": true, 00:16:48.847 "hosts": [], 00:16:48.847 "serial_number": "SPDK2", 00:16:48.847 "model_number": "SPDK bdev Controller", 00:16:48.847 "max_namespaces": 32, 00:16:48.847 "min_cntlid": 1, 00:16:48.847 "max_cntlid": 65519, 00:16:48.847 "namespaces": [ 00:16:48.847 { 00:16:48.847 "nsid": 1, 00:16:48.847 "bdev_name": "Malloc2", 00:16:48.847 "name": "Malloc2", 00:16:48.847 "nguid": 
"C28358D01ADE4D9BBBC14786C90816A5", 00:16:48.847 "uuid": "c28358d0-1ade-4d9b-bbc1-4786c90816a5" 00:16:48.847 } 00:16:48.847 ] 00:16:48.847 } 00:16:48.847 ] 00:16:48.847 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:48.847 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3864610 00:16:48.847 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:48.847 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:48.847 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:48.847 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:48.847 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:48.847 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:48.847 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:48.847 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:49.106 [2024-11-20 14:36:55.941240] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:49.106 Malloc4 00:16:49.106 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:49.106 [2024-11-20 14:36:56.112442] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:49.106 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:49.106 Asynchronous Event Request test 00:16:49.106 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:49.106 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:49.106 Registering asynchronous event callbacks... 00:16:49.106 Starting namespace attribute notice tests for all controllers... 00:16:49.106 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:49.106 aer_cb - Changed Namespace 00:16:49.106 Cleaning up... 00:16:49.364 [ 00:16:49.364 { 00:16:49.365 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:49.365 "subtype": "Discovery", 00:16:49.365 "listen_addresses": [], 00:16:49.365 "allow_any_host": true, 00:16:49.365 "hosts": [] 00:16:49.365 }, 00:16:49.365 { 00:16:49.365 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:49.365 "subtype": "NVMe", 00:16:49.365 "listen_addresses": [ 00:16:49.365 { 00:16:49.365 "trtype": "VFIOUSER", 00:16:49.365 "adrfam": "IPv4", 00:16:49.365 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:49.365 "trsvcid": "0" 00:16:49.365 } 00:16:49.365 ], 00:16:49.365 "allow_any_host": true, 00:16:49.365 "hosts": [], 00:16:49.365 "serial_number": "SPDK1", 00:16:49.365 "model_number": "SPDK bdev Controller", 00:16:49.365 "max_namespaces": 32, 00:16:49.365 "min_cntlid": 1, 00:16:49.365 "max_cntlid": 65519, 00:16:49.365 "namespaces": [ 00:16:49.365 { 00:16:49.365 "nsid": 1, 00:16:49.365 "bdev_name": "Malloc1", 00:16:49.365 "name": "Malloc1", 00:16:49.365 "nguid": "C304FCDAFA2F42129ACEA50CE4A32DEA", 00:16:49.365 
"uuid": "c304fcda-fa2f-4212-9ace-a50ce4a32dea" 00:16:49.365 }, 00:16:49.365 { 00:16:49.365 "nsid": 2, 00:16:49.365 "bdev_name": "Malloc3", 00:16:49.365 "name": "Malloc3", 00:16:49.365 "nguid": "3C6EEE55CF974A2D9A2DC3A2F5203232", 00:16:49.365 "uuid": "3c6eee55-cf97-4a2d-9a2d-c3a2f5203232" 00:16:49.365 } 00:16:49.365 ] 00:16:49.365 }, 00:16:49.365 { 00:16:49.365 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:49.365 "subtype": "NVMe", 00:16:49.365 "listen_addresses": [ 00:16:49.365 { 00:16:49.365 "trtype": "VFIOUSER", 00:16:49.365 "adrfam": "IPv4", 00:16:49.365 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:49.365 "trsvcid": "0" 00:16:49.365 } 00:16:49.365 ], 00:16:49.365 "allow_any_host": true, 00:16:49.365 "hosts": [], 00:16:49.365 "serial_number": "SPDK2", 00:16:49.365 "model_number": "SPDK bdev Controller", 00:16:49.365 "max_namespaces": 32, 00:16:49.365 "min_cntlid": 1, 00:16:49.365 "max_cntlid": 65519, 00:16:49.365 "namespaces": [ 00:16:49.365 { 00:16:49.365 "nsid": 1, 00:16:49.365 "bdev_name": "Malloc2", 00:16:49.365 "name": "Malloc2", 00:16:49.365 "nguid": "C28358D01ADE4D9BBBC14786C90816A5", 00:16:49.365 "uuid": "c28358d0-1ade-4d9b-bbc1-4786c90816a5" 00:16:49.365 }, 00:16:49.365 { 00:16:49.365 "nsid": 2, 00:16:49.365 "bdev_name": "Malloc4", 00:16:49.365 "name": "Malloc4", 00:16:49.365 "nguid": "29872AE9294741DC8FAB94A69E5732A3", 00:16:49.365 "uuid": "29872ae9-2947-41dc-8fab-94a69e5732a3" 00:16:49.365 } 00:16:49.365 ] 00:16:49.365 } 00:16:49.365 ] 00:16:49.365 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3864610 00:16:49.365 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:49.365 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3854760 00:16:49.365 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3854760 ']' 00:16:49.365 14:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3854760 00:16:49.365 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:49.365 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.365 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3854760 00:16:49.365 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:49.365 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:49.365 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3854760' 00:16:49.365 killing process with pid 3854760 00:16:49.365 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3854760 00:16:49.365 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3854760 00:16:49.624 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:49.624 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:49.624 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:49.624 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:49.624 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:49.624 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3864822 00:16:49.624 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3864822' 00:16:49.624 Process pid: 3864822 00:16:49.624 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:49.624 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3864822 00:16:49.624 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3864822 ']' 00:16:49.624 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.624 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.624 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.624 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.624 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:49.624 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:49.624 [2024-11-20 14:36:56.523725] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:49.624 [2024-11-20 14:36:56.524655] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:16:49.624 [2024-11-20 14:36:56.524694] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.624 [2024-11-20 14:36:56.590469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:49.624 [2024-11-20 14:36:56.619483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.624 [2024-11-20 14:36:56.619512] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.624 [2024-11-20 14:36:56.619518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.624 [2024-11-20 14:36:56.619522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.624 [2024-11-20 14:36:56.619527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.624 [2024-11-20 14:36:56.620731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.624 [2024-11-20 14:36:56.620884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.624 [2024-11-20 14:36:56.621033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.624 [2024-11-20 14:36:56.621035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:49.624 [2024-11-20 14:36:56.672504] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:49.624 [2024-11-20 14:36:56.672804] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:49.624 [2024-11-20 14:36:56.673551] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:16:49.624 [2024-11-20 14:36:56.673973] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:49.624 [2024-11-20 14:36:56.674177] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:49.883 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.883 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:49.883 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:50.820 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:50.820 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:50.820 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:50.820 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:50.820 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:50.820 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:51.078 Malloc1 00:16:51.078 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:51.336 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:51.336 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:51.593 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:51.593 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:51.593 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:51.852 Malloc2 00:16:51.852 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:51.852 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:52.111 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3864822 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3864822 ']' 
00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3864822 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3864822 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3864822' 00:16:52.370 killing process with pid 3864822 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3864822 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3864822 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:52.370 00:16:52.370 real 0m49.402s 00:16:52.370 user 3m11.588s 00:16:52.370 sys 0m2.295s 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:52.370 ************************************ 00:16:52.370 END TEST nvmf_vfio_user 00:16:52.370 ************************************ 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:52.370 ************************************ 00:16:52.370 START TEST nvmf_vfio_user_nvme_compliance 00:16:52.370 ************************************ 00:16:52.370 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:52.630 * Looking for test storage... 00:16:52.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.630 14:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:52.630 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:52.630 --rc genhtml_branch_coverage=1 00:16:52.630 --rc genhtml_function_coverage=1 00:16:52.630 --rc genhtml_legend=1 00:16:52.630 --rc geninfo_all_blocks=1 00:16:52.630 --rc geninfo_unexecuted_blocks=1 00:16:52.630 00:16:52.630 ' 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:52.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.630 --rc genhtml_branch_coverage=1 00:16:52.630 --rc genhtml_function_coverage=1 00:16:52.630 --rc genhtml_legend=1 00:16:52.630 --rc geninfo_all_blocks=1 00:16:52.630 --rc geninfo_unexecuted_blocks=1 00:16:52.630 00:16:52.630 ' 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:52.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.630 --rc genhtml_branch_coverage=1 00:16:52.630 --rc genhtml_function_coverage=1 00:16:52.630 --rc genhtml_legend=1 00:16:52.630 --rc geninfo_all_blocks=1 00:16:52.630 --rc geninfo_unexecuted_blocks=1 00:16:52.630 00:16:52.630 ' 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:52.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.630 --rc genhtml_branch_coverage=1 00:16:52.630 --rc genhtml_function_coverage=1 00:16:52.630 --rc genhtml_legend=1 00:16:52.630 --rc geninfo_all_blocks=1 00:16:52.630 --rc geninfo_unexecuted_blocks=1 00:16:52.630 00:16:52.630 ' 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # 
[[ Linux == FreeBSD ]] 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:52.630 14:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:52.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:52.630 
14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3865570 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3865570' 00:16:52.630 Process pid: 3865570 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3865570 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3865570 ']' 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:52.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.630 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:52.631 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:52.631 [2024-11-20 14:36:59.568262] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:16:52.631 [2024-11-20 14:36:59.568316] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.631 [2024-11-20 14:36:59.637095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:52.631 [2024-11-20 14:36:59.666579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.631 [2024-11-20 14:36:59.666609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.631 [2024-11-20 14:36:59.666615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.631 [2024-11-20 14:36:59.666620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.631 [2024-11-20 14:36:59.666624] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:52.631 [2024-11-20 14:36:59.667653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.631 [2024-11-20 14:36:59.667748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.631 [2024-11-20 14:36:59.667750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.890 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.890 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:52.890 14:36:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.826 14:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:53.826 malloc0 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:53.826 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:54.085 00:16:54.085 00:16:54.085 CUnit - A unit testing framework for C - Version 2.1-3 00:16:54.085 http://cunit.sourceforge.net/ 00:16:54.085 00:16:54.085 00:16:54.085 Suite: nvme_compliance 00:16:54.085 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 14:37:00.957631] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:54.085 [2024-11-20 14:37:00.958921] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:54.085 [2024-11-20 14:37:00.958933] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:54.085 [2024-11-20 14:37:00.958938] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:54.085 [2024-11-20 14:37:00.960647] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:54.085 passed 00:16:54.085 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 14:37:01.037137] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:54.085 [2024-11-20 14:37:01.040154] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:54.085 passed 00:16:54.085 Test: admin_identify_ns ...[2024-11-20 14:37:01.117599] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:54.345 [2024-11-20 14:37:01.180254] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:54.345 [2024-11-20 14:37:01.188255] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:54.345 [2024-11-20 14:37:01.209338] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:16:54.345 passed 00:16:54.345 Test: admin_get_features_mandatory_features ...[2024-11-20 14:37:01.281547] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:54.345 [2024-11-20 14:37:01.285572] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:54.345 passed 00:16:54.345 Test: admin_get_features_optional_features ...[2024-11-20 14:37:01.361022] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:54.345 [2024-11-20 14:37:01.364041] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:54.345 passed 00:16:54.604 Test: admin_set_features_number_of_queues ...[2024-11-20 14:37:01.439786] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:54.604 [2024-11-20 14:37:01.544333] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:54.604 passed 00:16:54.604 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 14:37:01.617531] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:54.604 [2024-11-20 14:37:01.620546] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:54.604 passed 00:16:54.864 Test: admin_get_log_page_with_lpo ...[2024-11-20 14:37:01.697263] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:54.864 [2024-11-20 14:37:01.767253] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:54.864 [2024-11-20 14:37:01.780294] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:54.864 passed 00:16:54.864 Test: fabric_property_get ...[2024-11-20 14:37:01.853493] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:54.864 [2024-11-20 14:37:01.854697] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:54.864 [2024-11-20 14:37:01.856518] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:54.864 passed 00:16:55.122 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 14:37:01.932989] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:55.122 [2024-11-20 14:37:01.934184] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:55.122 [2024-11-20 14:37:01.936010] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:55.122 passed 00:16:55.122 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 14:37:02.011749] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:55.122 [2024-11-20 14:37:02.096250] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:55.122 [2024-11-20 14:37:02.112254] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:55.122 [2024-11-20 14:37:02.117325] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:55.122 passed 00:16:55.382 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 14:37:02.190516] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:55.382 [2024-11-20 14:37:02.191718] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:55.382 [2024-11-20 14:37:02.193534] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:55.382 passed 00:16:55.382 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 14:37:02.268594] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:55.382 [2024-11-20 14:37:02.348248] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:55.382 [2024-11-20 
14:37:02.372250] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:55.382 [2024-11-20 14:37:02.377321] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:55.382 passed 00:16:55.641 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 14:37:02.449510] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:55.641 [2024-11-20 14:37:02.450707] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:55.641 [2024-11-20 14:37:02.450727] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:55.641 [2024-11-20 14:37:02.453541] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:55.641 passed 00:16:55.641 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 14:37:02.528591] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:55.641 [2024-11-20 14:37:02.624248] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:55.641 [2024-11-20 14:37:02.632249] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:55.641 [2024-11-20 14:37:02.640250] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:55.641 [2024-11-20 14:37:02.648247] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:55.641 [2024-11-20 14:37:02.677326] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:55.641 passed 00:16:55.900 Test: admin_create_io_sq_verify_pc ...[2024-11-20 14:37:02.749756] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:55.900 [2024-11-20 14:37:02.766258] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:55.900 [2024-11-20 14:37:02.783922] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:55.900 passed 00:16:55.900 Test: admin_create_io_qp_max_qps ...[2024-11-20 14:37:02.858390] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:57.277 [2024-11-20 14:37:03.979251] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:57.535 [2024-11-20 14:37:04.356035] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:57.535 passed 00:16:57.535 Test: admin_create_io_sq_shared_cq ...[2024-11-20 14:37:04.430801] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:57.535 [2024-11-20 14:37:04.562247] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:57.794 [2024-11-20 14:37:04.599296] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:57.794 passed 00:16:57.794 00:16:57.794 Run Summary: Type Total Ran Passed Failed Inactive 00:16:57.794 suites 1 1 n/a 0 0 00:16:57.794 tests 18 18 18 0 0 00:16:57.794 asserts 360 360 360 0 n/a 00:16:57.794 00:16:57.794 Elapsed time = 1.499 seconds 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3865570 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3865570 ']' 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3865570 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3865570 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3865570' 00:16:57.794 killing process with pid 3865570 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3865570 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3865570 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:57.794 00:16:57.794 real 0m5.410s 00:16:57.794 user 0m15.404s 00:16:57.794 sys 0m0.382s 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:57.794 ************************************ 00:16:57.794 END TEST nvmf_vfio_user_nvme_compliance 00:16:57.794 ************************************ 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:57.794 ************************************ 00:16:57.794 START TEST nvmf_vfio_user_fuzz 00:16:57.794 ************************************ 00:16:57.794 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:58.055 * Looking for test storage... 00:16:58.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:58.055 14:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:58.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.055 --rc genhtml_branch_coverage=1 00:16:58.055 --rc genhtml_function_coverage=1 00:16:58.055 --rc genhtml_legend=1 00:16:58.055 --rc geninfo_all_blocks=1 00:16:58.055 --rc geninfo_unexecuted_blocks=1 00:16:58.055 00:16:58.055 ' 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:58.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.055 --rc genhtml_branch_coverage=1 00:16:58.055 --rc genhtml_function_coverage=1 00:16:58.055 --rc genhtml_legend=1 00:16:58.055 --rc geninfo_all_blocks=1 00:16:58.055 --rc geninfo_unexecuted_blocks=1 00:16:58.055 00:16:58.055 ' 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:58.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.055 --rc genhtml_branch_coverage=1 00:16:58.055 --rc genhtml_function_coverage=1 00:16:58.055 --rc genhtml_legend=1 00:16:58.055 --rc geninfo_all_blocks=1 00:16:58.055 --rc geninfo_unexecuted_blocks=1 00:16:58.055 00:16:58.055 ' 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:58.055 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:58.055 --rc genhtml_branch_coverage=1 00:16:58.055 --rc genhtml_function_coverage=1 00:16:58.055 --rc genhtml_legend=1 00:16:58.055 --rc geninfo_all_blocks=1 00:16:58.055 --rc geninfo_unexecuted_blocks=1 00:16:58.055 00:16:58.055 ' 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.055 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.056 14:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:58.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3866660 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3866660' 00:16:58.056 Process pid: 3866660 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3866660 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3866660 ']' 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:58.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:58.056 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:58.316 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.316 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:58.316 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:59.252 malloc0 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:59.252 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:31.352 Fuzzing completed. Shutting down the fuzz application 00:17:31.353 00:17:31.353 Dumping successful admin opcodes: 00:17:31.353 8, 9, 10, 24, 00:17:31.353 Dumping successful io opcodes: 00:17:31.353 0, 00:17:31.353 NS: 0x20000081ef00 I/O qp, Total commands completed: 1298466, total successful commands: 5089, random_seed: 936867520 00:17:31.353 NS: 0x20000081ef00 admin qp, Total commands completed: 301008, total successful commands: 2418, random_seed: 4146672704 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3866660 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3866660 ']' 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3866660 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3866660 00:17:31.353 14:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3866660' 00:17:31.353 killing process with pid 3866660 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3866660 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3866660 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:31.353 00:17:31.353 real 0m32.030s 00:17:31.353 user 0m33.370s 00:17:31.353 sys 0m26.542s 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:31.353 ************************************ 00:17:31.353 END TEST nvmf_vfio_user_fuzz 00:17:31.353 ************************************ 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:31.353 ************************************ 00:17:31.353 START TEST nvmf_auth_target 00:17:31.353 ************************************ 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:31.353 * Looking for test storage... 00:17:31.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:17:31.353 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:31.353 14:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.353 14:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:31.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.353 --rc genhtml_branch_coverage=1 00:17:31.353 --rc genhtml_function_coverage=1 00:17:31.353 --rc genhtml_legend=1 00:17:31.353 --rc geninfo_all_blocks=1 00:17:31.353 --rc geninfo_unexecuted_blocks=1 00:17:31.353 00:17:31.353 ' 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:31.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.353 --rc genhtml_branch_coverage=1 00:17:31.353 --rc genhtml_function_coverage=1 00:17:31.353 --rc genhtml_legend=1 00:17:31.353 --rc geninfo_all_blocks=1 00:17:31.353 --rc geninfo_unexecuted_blocks=1 00:17:31.353 00:17:31.353 ' 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:31.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.353 --rc genhtml_branch_coverage=1 00:17:31.353 --rc genhtml_function_coverage=1 00:17:31.353 --rc genhtml_legend=1 00:17:31.353 --rc geninfo_all_blocks=1 00:17:31.353 --rc geninfo_unexecuted_blocks=1 00:17:31.353 00:17:31.353 ' 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:31.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.353 --rc genhtml_branch_coverage=1 00:17:31.353 --rc genhtml_function_coverage=1 00:17:31.353 --rc genhtml_legend=1 00:17:31.353 
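The trace above steps through `cmp_versions` in `scripts/common.sh` to decide whether the installed lcov is older than 2.x (and therefore needs the legacy `--rc lcov_*` option names). The helper splits each version string on `.`, `-`, and `:` and compares components numerically, left to right. A minimal re-creation of that shape (the function name `lt` matches the trace, but the body is an illustrative sketch, not SPDK's exact code):

```shell
# Sketch of a component-wise "version less-than" check, assuming numeric
# components split on the same IFS=.-: the trace shows.
lt() {
    local -a ver1 ver2
    local i max
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( i = 0; i < max; i++ )); do
        # Missing components compare as 0, so "2" behaves like "2.0".
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not "less than"
}

lt 1.15 2 && echo "lcov < 2: use old --rc option names"
```

With lcov reporting 1.15, `lt 1.15 2` succeeds, which is why the run exports the `lcov_branch_coverage=1`/`lcov_function_coverage=1` spellings seen in the log.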
--rc geninfo_all_blocks=1 00:17:31.353 --rc geninfo_unexecuted_blocks=1 00:17:31.353 00:17:31.353 ' 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.353 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.354 
14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:31.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:31.354 14:37:37 
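The recorded warning `[: : integer expression expected` at `nvmf/common.sh` line 33 comes from the trace's `'[' '' -eq 1 ']'`: a numeric `test` operator applied to an empty operand. The test framework tolerates it (the condition simply evaluates false), but the failure mode and the usual guards are easy to reproduce; variable names below are illustrative:

```shell
# Reproduction of the logged warning shape, plus two defensive patterns.
flag=""

# Failing shape: [ "" -eq 1 ] is not a valid integer comparison
# (stderr suppressed here; the branch is simply not taken).
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "enabled"
fi

# Guard 1: default the expansion so the operand is always an integer.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
fi

# Guard 2: a string comparison never needs an integer operand.
if [ "$flag" = "1" ]; then
    echo "enabled"
fi
```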
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:31.354 14:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:31.354 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:35.566 14:37:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:35.566 14:37:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:35.566 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:35.566 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.566 
14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:35.566 Found net devices under 0000:31:00.0: cvl_0_0 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:35.566 
14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:35.566 Found net devices under 0000:31:00.1: cvl_0_1 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:35.566 14:37:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:35.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:17:35.566 00:17:35.566 --- 10.0.0.2 ping statistics --- 00:17:35.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.566 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:17:35.566 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:35.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:35.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:17:35.567 00:17:35.567 --- 10.0.0.1 ping statistics --- 00:17:35.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.567 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
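The `nvmf_tcp_init` sequence above builds a two-port topology: the target-side port of the NIC pair is moved into a private network namespace so initiator and target traffic cross a real link, then an iptables rule opens the NVMe/TCP port and a ping in each direction confirms reachability. A condensed sketch of those steps, using the interface and namespace names from the log; the commands require root, so they are wrapped in a function here rather than executed:

```shell
# Sketch of the dual-interface namespace setup the log performs; running
# this function needs root and the cvl_0_* interfaces present.
setup_target_netns() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"

    # Move the target-side port into its own namespace.
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"

    # Initiator keeps 10.0.0.1; the namespaced target gets 10.0.0.2.
    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Open the NVMe/TCP listener port, then verify both directions.
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

The sub-millisecond ping RTTs in the log (0.602 ms and 0.295 ms) are the success criteria for this stage before the target application is launched inside the namespace.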
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3877602 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3877602 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3877602 ']' 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
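`waitforlisten` in the trace launches `nvmf_tgt` inside the namespace and then blocks until the process answers on `/var/tmp/spdk.sock` (with `max_retries=100`, per the log). The real helper also issues an RPC to confirm the daemon is responsive; a simplified stand-in that only polls for the UNIX socket:

```shell
# Simplified sketch of the wait loop: poll for a UNIX domain socket with a
# bounded retry budget. The real waitforlisten additionally probes via RPC.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0
        sleep 0.1
    done
    return 1
}
```

Usage mirrors the log: `wait_for_socket /var/tmp/spdk.sock 100` after starting the target, failing the test early if the daemon never comes up.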
00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.567 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3877622 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=da54243b9431b98a5c146e870b1e04498d0486e74497907b 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.VgM 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key da54243b9431b98a5c146e870b1e04498d0486e74497907b 0 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 da54243b9431b98a5c146e870b1e04498d0486e74497907b 0 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=da54243b9431b98a5c146e870b1e04498d0486e74497907b 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.VgM 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.VgM 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.VgM 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1970cad1e0890480bcd26b2fd90ea3726b37d47338d8ed6944cf8409cda54f89 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Y0B 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1970cad1e0890480bcd26b2fd90ea3726b37d47338d8ed6944cf8409cda54f89 3 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1970cad1e0890480bcd26b2fd90ea3726b37d47338d8ed6944cf8409cda54f89 3 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1970cad1e0890480bcd26b2fd90ea3726b37d47338d8ed6944cf8409cda54f89 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Y0B 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Y0B 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Y0B 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=542c174437f652e65c08c7cde952efa4 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.De2 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 542c174437f652e65c08c7cde952efa4 1 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
542c174437f652e65c08c7cde952efa4 1 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=542c174437f652e65c08c7cde952efa4 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.De2 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.De2 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.De2 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=23532d3c2fb961e62d3dcc025fb37e27d8e68c4f4eb68513 00:17:35.906 14:37:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zU4 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 23532d3c2fb961e62d3dcc025fb37e27d8e68c4f4eb68513 2 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 23532d3c2fb961e62d3dcc025fb37e27d8e68c4f4eb68513 2 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=23532d3c2fb961e62d3dcc025fb37e27d8e68c4f4eb68513 00:17:35.906 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:35.907 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:35.907 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zU4 00:17:35.907 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zU4 00:17:35.907 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.zU4 00:17:35.907 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:35.907 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:35.907 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:35.907 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:17:35.907 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:35.907 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:35.907 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:35.907 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=263694e2ebfc87027c61e174d16f9a9e13c4037a454a0e07 00:17:35.907 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:36.211 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.HT4 00:17:36.211 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 263694e2ebfc87027c61e174d16f9a9e13c4037a454a0e07 2 00:17:36.211 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 263694e2ebfc87027c61e174d16f9a9e13c4037a454a0e07 2 00:17:36.211 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:36.211 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:36.211 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=263694e2ebfc87027c61e174d16f9a9e13c4037a454a0e07 00:17:36.211 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:36.211 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:36.211 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.HT4 00:17:36.211 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.HT4 00:17:36.211 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.HT4 00:17:36.211 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:36.211 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:36.211 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:36.211 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:36.211 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:36.211 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:36.212 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:36.212 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=71201f6250ca6663fe7461eb80d78231 00:17:36.212 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:36.212 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.1Ol 00:17:36.212 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 71201f6250ca6663fe7461eb80d78231 1 00:17:36.212 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 71201f6250ca6663fe7461eb80d78231 1 00:17:36.212 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:36.212 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:36.212 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=71201f6250ca6663fe7461eb80d78231 00:17:36.212 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:17:36.212 14:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.1Ol 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.1Ol 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.1Ol 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=27f53210c3962ca8ad1b95527c573e63e87d73065a5f7590e75d250e466baee6 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.w9E 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 27f53210c3962ca8ad1b95527c573e63e87d73065a5f7590e75d250e466baee6 3 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 27f53210c3962ca8ad1b95527c573e63e87d73065a5f7590e75d250e466baee6 3 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=27f53210c3962ca8ad1b95527c573e63e87d73065a5f7590e75d250e466baee6 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.w9E 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.w9E 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.w9E 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3877602 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3877602 ']' 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3877622 /var/tmp/host.sock 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3877622 ']' 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:36.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.212 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.490 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.490 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:36.490 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:36.490 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.490 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.490 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.490 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:36.490 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.VgM 00:17:36.490 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.490 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.490 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.491 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.VgM 00:17:36.491 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.VgM 00:17:36.750 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.Y0B ]] 00:17:36.750 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Y0B 00:17:36.750 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.750 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.750 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.750 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Y0B 00:17:36.750 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Y0B 00:17:36.750 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:36.750 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.De2 00:17:36.750 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.750 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.750 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.750 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.De2 00:17:36.750 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.De2 00:17:37.009 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.zU4 ]] 00:17:37.009 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zU4 00:17:37.009 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.009 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.009 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.009 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zU4 00:17:37.009 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zU4 00:17:37.009 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:37.009 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.HT4 00:17:37.009 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.009 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.268 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.269 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.HT4 00:17:37.269 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.HT4 00:17:37.269 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.1Ol ]] 00:17:37.269 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Ol 00:17:37.269 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.269 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.269 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.269 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Ol 00:17:37.269 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Ol 00:17:37.528 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:37.528 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.w9E 00:17:37.528 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.528 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.528 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.528 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.w9E 00:17:37.528 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.w9E 00:17:37.528 14:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:37.528 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:37.528 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.528 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.528 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:37.528 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:37.787 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:37.787 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.787 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:37.787 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:37.787 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:37.787 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.787 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.787 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.787 14:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.787 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.787 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.787 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.787 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.045 00:17:38.045 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.045 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.046 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.304 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.304 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.304 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.304 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.304 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.304 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.304 { 00:17:38.304 "cntlid": 1, 00:17:38.304 "qid": 0, 00:17:38.304 "state": "enabled", 00:17:38.304 "thread": "nvmf_tgt_poll_group_000", 00:17:38.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:38.304 "listen_address": { 00:17:38.304 "trtype": "TCP", 00:17:38.304 "adrfam": "IPv4", 00:17:38.304 "traddr": "10.0.0.2", 00:17:38.304 "trsvcid": "4420" 00:17:38.304 }, 00:17:38.304 "peer_address": { 00:17:38.304 "trtype": "TCP", 00:17:38.304 "adrfam": "IPv4", 00:17:38.304 "traddr": "10.0.0.1", 00:17:38.304 "trsvcid": "52874" 00:17:38.304 }, 00:17:38.304 "auth": { 00:17:38.304 "state": "completed", 00:17:38.304 "digest": "sha256", 00:17:38.304 "dhgroup": "null" 00:17:38.304 } 00:17:38.304 } 00:17:38.304 ]' 00:17:38.305 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.305 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.305 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.305 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:38.305 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.305 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.305 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.305 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.564 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:17:38.564 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:17:39.133 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.133 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:39.133 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.133 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.133 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.133 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.133 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:17:39.133 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:39.392 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:39.392 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.392 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:39.393 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:39.393 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.393 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.393 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.393 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.393 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.393 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.393 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.393 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.393 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.393 00:17:39.393 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.393 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.393 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.652 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.652 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.652 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.652 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.652 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.652 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.652 { 00:17:39.652 "cntlid": 3, 00:17:39.652 "qid": 0, 00:17:39.652 "state": "enabled", 00:17:39.652 "thread": "nvmf_tgt_poll_group_000", 00:17:39.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:39.652 "listen_address": { 00:17:39.652 "trtype": "TCP", 00:17:39.652 "adrfam": "IPv4", 00:17:39.652 
"traddr": "10.0.0.2", 00:17:39.652 "trsvcid": "4420" 00:17:39.652 }, 00:17:39.652 "peer_address": { 00:17:39.652 "trtype": "TCP", 00:17:39.652 "adrfam": "IPv4", 00:17:39.652 "traddr": "10.0.0.1", 00:17:39.652 "trsvcid": "52904" 00:17:39.652 }, 00:17:39.652 "auth": { 00:17:39.652 "state": "completed", 00:17:39.652 "digest": "sha256", 00:17:39.652 "dhgroup": "null" 00:17:39.652 } 00:17:39.652 } 00:17:39.652 ]' 00:17:39.652 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.652 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.652 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.652 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:39.653 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.912 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.912 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.912 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.913 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:17:39.913 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 
--hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:17:40.482 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.482 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:40.482 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.482 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.482 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.482 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.482 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:40.482 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:40.743 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:40.743 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.743 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:40.743 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:17:40.743 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:40.743 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.743 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.743 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.743 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.743 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.743 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.743 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.743 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.003 00:17:41.003 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.003 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.003 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.003 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.003 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.003 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.003 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.003 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.003 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.003 { 00:17:41.003 "cntlid": 5, 00:17:41.003 "qid": 0, 00:17:41.003 "state": "enabled", 00:17:41.003 "thread": "nvmf_tgt_poll_group_000", 00:17:41.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:41.003 "listen_address": { 00:17:41.003 "trtype": "TCP", 00:17:41.003 "adrfam": "IPv4", 00:17:41.003 "traddr": "10.0.0.2", 00:17:41.003 "trsvcid": "4420" 00:17:41.003 }, 00:17:41.003 "peer_address": { 00:17:41.003 "trtype": "TCP", 00:17:41.003 "adrfam": "IPv4", 00:17:41.003 "traddr": "10.0.0.1", 00:17:41.003 "trsvcid": "52914" 00:17:41.003 }, 00:17:41.003 "auth": { 00:17:41.003 "state": "completed", 00:17:41.003 "digest": "sha256", 00:17:41.003 "dhgroup": "null" 00:17:41.003 } 00:17:41.003 } 00:17:41.003 ]' 00:17:41.003 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.003 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.003 14:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.003 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:41.263 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.263 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.263 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.263 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.263 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:17:41.263 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:17:41.831 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.831 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:41.831 
14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.831 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.831 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.831 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.831 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:41.831 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:42.091 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:42.091 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.091 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:42.091 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:42.091 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.091 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.091 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:42.091 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.091 14:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.091 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.091 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.091 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.091 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.351 00:17:42.351 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.351 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.351 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.351 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.351 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.351 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.351 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.351 14:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.351 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.351 { 00:17:42.351 "cntlid": 7, 00:17:42.351 "qid": 0, 00:17:42.351 "state": "enabled", 00:17:42.351 "thread": "nvmf_tgt_poll_group_000", 00:17:42.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:42.351 "listen_address": { 00:17:42.351 "trtype": "TCP", 00:17:42.351 "adrfam": "IPv4", 00:17:42.351 "traddr": "10.0.0.2", 00:17:42.351 "trsvcid": "4420" 00:17:42.351 }, 00:17:42.351 "peer_address": { 00:17:42.351 "trtype": "TCP", 00:17:42.351 "adrfam": "IPv4", 00:17:42.351 "traddr": "10.0.0.1", 00:17:42.351 "trsvcid": "52930" 00:17:42.351 }, 00:17:42.351 "auth": { 00:17:42.351 "state": "completed", 00:17:42.351 "digest": "sha256", 00:17:42.351 "dhgroup": "null" 00:17:42.351 } 00:17:42.351 } 00:17:42.351 ]' 00:17:42.351 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.351 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.351 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.611 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:42.611 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.611 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.611 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.611 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:42.611 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:17:42.611 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:17:43.180 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.180 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:43.180 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.180 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.180 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.180 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.180 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.180 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:43.180 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:43.439 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:43.439 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.439 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:43.439 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:43.439 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:43.439 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.439 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.439 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.439 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.439 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.439 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.439 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.439 14:37:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.698 00:17:43.698 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.698 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.698 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.698 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.698 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.698 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.698 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.698 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.698 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.698 { 00:17:43.698 "cntlid": 9, 00:17:43.698 "qid": 0, 00:17:43.698 "state": "enabled", 00:17:43.698 "thread": "nvmf_tgt_poll_group_000", 00:17:43.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:43.698 "listen_address": { 00:17:43.698 "trtype": "TCP", 00:17:43.698 "adrfam": "IPv4", 00:17:43.698 "traddr": "10.0.0.2", 00:17:43.698 "trsvcid": "4420" 00:17:43.698 }, 00:17:43.698 "peer_address": { 
00:17:43.698 "trtype": "TCP", 00:17:43.698 "adrfam": "IPv4", 00:17:43.698 "traddr": "10.0.0.1", 00:17:43.698 "trsvcid": "52966" 00:17:43.698 }, 00:17:43.698 "auth": { 00:17:43.698 "state": "completed", 00:17:43.698 "digest": "sha256", 00:17:43.698 "dhgroup": "ffdhe2048" 00:17:43.698 } 00:17:43.698 } 00:17:43.698 ]' 00:17:43.698 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.698 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.698 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.957 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:43.957 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.957 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.957 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.957 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.957 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:17:43.957 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:17:44.525 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.525 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:44.525 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.525 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.525 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.525 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.525 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:44.525 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:44.785 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:44.785 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.785 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:44.785 14:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:44.785 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:44.785 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.785 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.785 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.785 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.785 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.785 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.785 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.785 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.044 00:17:45.044 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.044 14:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.044 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.044 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.044 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.044 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.044 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.302 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.302 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.302 { 00:17:45.302 "cntlid": 11, 00:17:45.302 "qid": 0, 00:17:45.302 "state": "enabled", 00:17:45.302 "thread": "nvmf_tgt_poll_group_000", 00:17:45.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:45.302 "listen_address": { 00:17:45.302 "trtype": "TCP", 00:17:45.302 "adrfam": "IPv4", 00:17:45.302 "traddr": "10.0.0.2", 00:17:45.302 "trsvcid": "4420" 00:17:45.302 }, 00:17:45.302 "peer_address": { 00:17:45.302 "trtype": "TCP", 00:17:45.302 "adrfam": "IPv4", 00:17:45.302 "traddr": "10.0.0.1", 00:17:45.302 "trsvcid": "52978" 00:17:45.302 }, 00:17:45.302 "auth": { 00:17:45.302 "state": "completed", 00:17:45.302 "digest": "sha256", 00:17:45.302 "dhgroup": "ffdhe2048" 00:17:45.302 } 00:17:45.302 } 00:17:45.302 ]' 00:17:45.302 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.302 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:17:45.302 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.302 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:45.302 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.302 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.302 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.302 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.302 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:17:45.302 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:17:45.870 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.870 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:45.870 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.870 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.870 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.870 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.870 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:45.870 14:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:46.129 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:46.129 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.129 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:46.129 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:46.129 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:46.129 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.129 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.129 14:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.129 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.129 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.129 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.130 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.130 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.389 00:17:46.389 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.389 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.389 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.648 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.648 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.648 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.648 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.648 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.648 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.648 { 00:17:46.648 "cntlid": 13, 00:17:46.648 "qid": 0, 00:17:46.648 "state": "enabled", 00:17:46.648 "thread": "nvmf_tgt_poll_group_000", 00:17:46.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:46.648 "listen_address": { 00:17:46.648 "trtype": "TCP", 00:17:46.648 "adrfam": "IPv4", 00:17:46.648 "traddr": "10.0.0.2", 00:17:46.648 "trsvcid": "4420" 00:17:46.648 }, 00:17:46.648 "peer_address": { 00:17:46.648 "trtype": "TCP", 00:17:46.648 "adrfam": "IPv4", 00:17:46.648 "traddr": "10.0.0.1", 00:17:46.648 "trsvcid": "53000" 00:17:46.648 }, 00:17:46.648 "auth": { 00:17:46.648 "state": "completed", 00:17:46.648 "digest": "sha256", 00:17:46.648 "dhgroup": "ffdhe2048" 00:17:46.648 } 00:17:46.649 } 00:17:46.649 ]' 00:17:46.649 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.649 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.649 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.649 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.649 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.649 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.649 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:46.649 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.908 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:17:46.908 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.476 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.736 00:17:47.736 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.736 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.736 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.996 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.996 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.996 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.996 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.996 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.996 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.996 { 00:17:47.996 "cntlid": 15, 00:17:47.996 "qid": 0, 00:17:47.996 "state": "enabled", 00:17:47.996 "thread": "nvmf_tgt_poll_group_000", 00:17:47.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:47.996 "listen_address": { 00:17:47.996 "trtype": "TCP", 00:17:47.996 "adrfam": "IPv4", 00:17:47.996 "traddr": "10.0.0.2", 00:17:47.996 "trsvcid": 
"4420" 00:17:47.996 }, 00:17:47.996 "peer_address": { 00:17:47.996 "trtype": "TCP", 00:17:47.996 "adrfam": "IPv4", 00:17:47.996 "traddr": "10.0.0.1", 00:17:47.996 "trsvcid": "44840" 00:17:47.996 }, 00:17:47.996 "auth": { 00:17:47.996 "state": "completed", 00:17:47.996 "digest": "sha256", 00:17:47.996 "dhgroup": "ffdhe2048" 00:17:47.996 } 00:17:47.996 } 00:17:47.996 ]' 00:17:47.996 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.996 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:47.996 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.996 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:47.996 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.996 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.996 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.997 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.256 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:17:48.256 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret 
DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.883 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.884 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.144 00:17:49.144 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.144 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:49.144 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.404 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.405 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.405 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.405 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.405 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.405 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.405 { 00:17:49.405 "cntlid": 17, 00:17:49.405 "qid": 0, 00:17:49.405 "state": "enabled", 00:17:49.405 "thread": "nvmf_tgt_poll_group_000", 00:17:49.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:49.405 "listen_address": { 00:17:49.405 "trtype": "TCP", 00:17:49.405 "adrfam": "IPv4", 00:17:49.405 "traddr": "10.0.0.2", 00:17:49.405 "trsvcid": "4420" 00:17:49.405 }, 00:17:49.405 "peer_address": { 00:17:49.405 "trtype": "TCP", 00:17:49.405 "adrfam": "IPv4", 00:17:49.405 "traddr": "10.0.0.1", 00:17:49.405 "trsvcid": "44858" 00:17:49.405 }, 00:17:49.405 "auth": { 00:17:49.405 "state": "completed", 00:17:49.405 "digest": "sha256", 00:17:49.405 "dhgroup": "ffdhe3072" 00:17:49.405 } 00:17:49.405 } 00:17:49.405 ]' 00:17:49.405 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.405 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.405 14:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.405 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:49.405 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.405 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.405 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.405 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.665 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:17:49.665 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:17:50.234 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.234 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:50.234 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.234 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.234 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.234 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.234 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:50.234 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:50.234 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:50.234 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.234 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:50.234 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:50.234 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:50.234 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.234 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.234 14:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.234 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.235 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.235 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.235 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.235 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.494 00:17:50.494 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.494 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.494 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.754 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.754 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.754 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.754 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.754 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.754 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.754 { 00:17:50.754 "cntlid": 19, 00:17:50.754 "qid": 0, 00:17:50.754 "state": "enabled", 00:17:50.754 "thread": "nvmf_tgt_poll_group_000", 00:17:50.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:50.754 "listen_address": { 00:17:50.754 "trtype": "TCP", 00:17:50.754 "adrfam": "IPv4", 00:17:50.754 "traddr": "10.0.0.2", 00:17:50.754 "trsvcid": "4420" 00:17:50.754 }, 00:17:50.754 "peer_address": { 00:17:50.754 "trtype": "TCP", 00:17:50.754 "adrfam": "IPv4", 00:17:50.754 "traddr": "10.0.0.1", 00:17:50.754 "trsvcid": "44892" 00:17:50.754 }, 00:17:50.754 "auth": { 00:17:50.754 "state": "completed", 00:17:50.754 "digest": "sha256", 00:17:50.754 "dhgroup": "ffdhe3072" 00:17:50.754 } 00:17:50.754 } 00:17:50.754 ]' 00:17:50.754 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.754 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.754 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.754 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:50.754 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.754 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.754 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:50.755 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.015 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:17:51.015 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:17:51.586 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.586 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:51.586 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.586 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.586 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.586 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.586 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:51.586 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:51.586 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:51.586 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.586 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:51.586 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:51.586 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:51.586 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.586 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.586 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.586 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.846 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.846 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.846 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.846 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.846 00:17:51.846 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.846 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.846 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.106 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.106 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.106 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.106 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.106 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.106 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.106 { 00:17:52.106 "cntlid": 21, 00:17:52.106 "qid": 0, 00:17:52.106 "state": "enabled", 00:17:52.106 "thread": "nvmf_tgt_poll_group_000", 00:17:52.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:52.106 "listen_address": { 
00:17:52.106 "trtype": "TCP", 00:17:52.106 "adrfam": "IPv4", 00:17:52.106 "traddr": "10.0.0.2", 00:17:52.106 "trsvcid": "4420" 00:17:52.106 }, 00:17:52.106 "peer_address": { 00:17:52.106 "trtype": "TCP", 00:17:52.106 "adrfam": "IPv4", 00:17:52.106 "traddr": "10.0.0.1", 00:17:52.106 "trsvcid": "44932" 00:17:52.106 }, 00:17:52.106 "auth": { 00:17:52.106 "state": "completed", 00:17:52.106 "digest": "sha256", 00:17:52.106 "dhgroup": "ffdhe3072" 00:17:52.106 } 00:17:52.106 } 00:17:52.106 ]' 00:17:52.106 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.106 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.106 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.106 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.106 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.106 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.106 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.106 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.365 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:17:52.365 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:17:52.935 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.935 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:52.935 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.935 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.935 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.935 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.935 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:52.935 14:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:53.196 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:53.196 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.196 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:17:53.196 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:53.196 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:53.196 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.196 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:53.196 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.196 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.196 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.196 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:53.196 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.196 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.196 00:17:53.456 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.456 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.456 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.456 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.456 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.456 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.456 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.456 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.456 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.456 { 00:17:53.456 "cntlid": 23, 00:17:53.456 "qid": 0, 00:17:53.456 "state": "enabled", 00:17:53.456 "thread": "nvmf_tgt_poll_group_000", 00:17:53.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:53.456 "listen_address": { 00:17:53.456 "trtype": "TCP", 00:17:53.456 "adrfam": "IPv4", 00:17:53.456 "traddr": "10.0.0.2", 00:17:53.456 "trsvcid": "4420" 00:17:53.456 }, 00:17:53.456 "peer_address": { 00:17:53.456 "trtype": "TCP", 00:17:53.456 "adrfam": "IPv4", 00:17:53.456 "traddr": "10.0.0.1", 00:17:53.456 "trsvcid": "44956" 00:17:53.456 }, 00:17:53.456 "auth": { 00:17:53.456 "state": "completed", 00:17:53.456 "digest": "sha256", 00:17:53.456 "dhgroup": "ffdhe3072" 00:17:53.456 } 00:17:53.456 } 00:17:53.456 ]' 00:17:53.456 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.456 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.456 14:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.456 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:53.456 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.715 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.715 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.715 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.716 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:17:53.716 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:17:54.284 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.544 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.804 00:17:54.804 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.804 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.804 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.063 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.063 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.063 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.063 14:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.063 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.063 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.063 { 00:17:55.063 "cntlid": 25, 00:17:55.063 "qid": 0, 00:17:55.063 "state": "enabled", 00:17:55.063 "thread": "nvmf_tgt_poll_group_000", 00:17:55.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:55.063 "listen_address": { 00:17:55.063 "trtype": "TCP", 00:17:55.063 "adrfam": "IPv4", 00:17:55.063 "traddr": "10.0.0.2", 00:17:55.063 "trsvcid": "4420" 00:17:55.063 }, 00:17:55.063 "peer_address": { 00:17:55.063 "trtype": "TCP", 00:17:55.063 "adrfam": "IPv4", 00:17:55.063 "traddr": "10.0.0.1", 00:17:55.063 "trsvcid": "44986" 00:17:55.063 }, 00:17:55.063 "auth": { 00:17:55.063 "state": "completed", 00:17:55.063 "digest": "sha256", 00:17:55.063 "dhgroup": "ffdhe4096" 00:17:55.063 } 00:17:55.063 } 00:17:55.063 ]' 00:17:55.063 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.063 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.063 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.063 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:55.063 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.063 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.063 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.063 14:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.323 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:17:55.323 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.891 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.150 00:17:56.150 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.150 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.150 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.410 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.410 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.410 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.410 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.410 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.410 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.410 { 00:17:56.410 "cntlid": 27, 00:17:56.410 "qid": 0, 00:17:56.410 "state": "enabled", 00:17:56.410 "thread": "nvmf_tgt_poll_group_000", 00:17:56.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:56.410 
"listen_address": { 00:17:56.410 "trtype": "TCP", 00:17:56.410 "adrfam": "IPv4", 00:17:56.410 "traddr": "10.0.0.2", 00:17:56.410 "trsvcid": "4420" 00:17:56.410 }, 00:17:56.410 "peer_address": { 00:17:56.410 "trtype": "TCP", 00:17:56.410 "adrfam": "IPv4", 00:17:56.410 "traddr": "10.0.0.1", 00:17:56.410 "trsvcid": "45016" 00:17:56.410 }, 00:17:56.410 "auth": { 00:17:56.410 "state": "completed", 00:17:56.410 "digest": "sha256", 00:17:56.410 "dhgroup": "ffdhe4096" 00:17:56.410 } 00:17:56.410 } 00:17:56.410 ]' 00:17:56.410 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.410 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.410 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.410 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:56.410 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.410 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.410 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.410 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.670 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:17:56.670 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:17:57.239 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.239 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:57.239 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.239 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.239 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.239 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.239 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:57.239 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:57.499 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:57.499 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.499 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:17:57.499 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:57.499 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:57.499 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.499 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.499 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.499 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.499 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.499 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.499 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.499 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.759 00:17:57.759 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:17:57.759 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.759 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.759 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.759 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.759 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.759 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.759 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.759 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.759 { 00:17:57.759 "cntlid": 29, 00:17:57.759 "qid": 0, 00:17:57.759 "state": "enabled", 00:17:57.759 "thread": "nvmf_tgt_poll_group_000", 00:17:57.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:57.759 "listen_address": { 00:17:57.759 "trtype": "TCP", 00:17:57.759 "adrfam": "IPv4", 00:17:57.759 "traddr": "10.0.0.2", 00:17:57.759 "trsvcid": "4420" 00:17:57.759 }, 00:17:57.759 "peer_address": { 00:17:57.759 "trtype": "TCP", 00:17:57.759 "adrfam": "IPv4", 00:17:57.759 "traddr": "10.0.0.1", 00:17:57.759 "trsvcid": "43070" 00:17:57.759 }, 00:17:57.759 "auth": { 00:17:57.759 "state": "completed", 00:17:57.759 "digest": "sha256", 00:17:57.759 "dhgroup": "ffdhe4096" 00:17:57.759 } 00:17:57.759 } 00:17:57.759 ]' 00:17:57.760 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.760 14:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.760 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.019 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:58.019 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.019 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.019 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.019 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.019 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:17:58.019 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:17:58.589 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.589 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:58.589 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.589 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.589 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.589 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.589 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:58.589 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:58.848 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:58.848 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.848 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:58.848 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:58.848 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:58.848 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.848 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:58.848 14:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.848 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.848 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.848 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:58.848 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.848 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:59.108 00:17:59.108 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.108 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.108 14:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.108 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.108 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.108 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.108 14:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.367 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.367 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.367 { 00:17:59.367 "cntlid": 31, 00:17:59.367 "qid": 0, 00:17:59.368 "state": "enabled", 00:17:59.368 "thread": "nvmf_tgt_poll_group_000", 00:17:59.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:59.368 "listen_address": { 00:17:59.368 "trtype": "TCP", 00:17:59.368 "adrfam": "IPv4", 00:17:59.368 "traddr": "10.0.0.2", 00:17:59.368 "trsvcid": "4420" 00:17:59.368 }, 00:17:59.368 "peer_address": { 00:17:59.368 "trtype": "TCP", 00:17:59.368 "adrfam": "IPv4", 00:17:59.368 "traddr": "10.0.0.1", 00:17:59.368 "trsvcid": "43096" 00:17:59.368 }, 00:17:59.368 "auth": { 00:17:59.368 "state": "completed", 00:17:59.368 "digest": "sha256", 00:17:59.368 "dhgroup": "ffdhe4096" 00:17:59.368 } 00:17:59.368 } 00:17:59.368 ]' 00:17:59.368 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.368 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.368 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.368 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.368 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.368 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.368 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.368 14:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.368 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:17:59.368 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:17:59.936 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.936 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:59.936 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.936 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.936 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.936 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.936 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.936 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:17:59.937 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:00.196 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:00.196 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.196 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:00.196 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:00.196 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:00.196 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.196 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.196 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.196 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.196 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.196 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.196 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.196 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.455 00:18:00.455 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.455 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.455 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.715 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.715 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.715 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.715 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.715 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.715 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.715 { 00:18:00.715 "cntlid": 33, 00:18:00.715 "qid": 0, 00:18:00.715 "state": "enabled", 00:18:00.715 "thread": "nvmf_tgt_poll_group_000", 00:18:00.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:00.715 "listen_address": { 
00:18:00.715 "trtype": "TCP", 00:18:00.715 "adrfam": "IPv4", 00:18:00.715 "traddr": "10.0.0.2", 00:18:00.715 "trsvcid": "4420" 00:18:00.715 }, 00:18:00.715 "peer_address": { 00:18:00.715 "trtype": "TCP", 00:18:00.715 "adrfam": "IPv4", 00:18:00.715 "traddr": "10.0.0.1", 00:18:00.715 "trsvcid": "43126" 00:18:00.715 }, 00:18:00.715 "auth": { 00:18:00.715 "state": "completed", 00:18:00.715 "digest": "sha256", 00:18:00.715 "dhgroup": "ffdhe6144" 00:18:00.715 } 00:18:00.715 } 00:18:00.715 ]' 00:18:00.715 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.715 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.715 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.715 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.715 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.715 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.715 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.715 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.975 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:00.975 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:01.545 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.545 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:01.545 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.545 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.545 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.545 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.545 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:01.545 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:01.805 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:18:01.805 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:18:01.805 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:01.805 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:01.805 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:01.805 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.805 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.805 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.805 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.805 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.805 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.805 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.805 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.064 00:18:02.064 14:38:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.064 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.064 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.064 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.064 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.064 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.064 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.322 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.322 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.322 { 00:18:02.322 "cntlid": 35, 00:18:02.322 "qid": 0, 00:18:02.322 "state": "enabled", 00:18:02.322 "thread": "nvmf_tgt_poll_group_000", 00:18:02.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:02.322 "listen_address": { 00:18:02.322 "trtype": "TCP", 00:18:02.322 "adrfam": "IPv4", 00:18:02.322 "traddr": "10.0.0.2", 00:18:02.322 "trsvcid": "4420" 00:18:02.322 }, 00:18:02.322 "peer_address": { 00:18:02.322 "trtype": "TCP", 00:18:02.322 "adrfam": "IPv4", 00:18:02.322 "traddr": "10.0.0.1", 00:18:02.322 "trsvcid": "43158" 00:18:02.322 }, 00:18:02.322 "auth": { 00:18:02.322 "state": "completed", 00:18:02.322 "digest": "sha256", 00:18:02.322 "dhgroup": "ffdhe6144" 00:18:02.322 } 00:18:02.322 } 00:18:02.322 ]' 00:18:02.322 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:18:02.322 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.322 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.322 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:02.322 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.322 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.322 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.322 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.581 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:02.581 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:03.149 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.149 14:38:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:03.149 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.149 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.149 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.149 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.149 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:03.149 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:03.149 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:18:03.149 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.149 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:03.149 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:03.149 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:03.149 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.149 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.149 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.149 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.149 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.149 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.149 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.149 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.408 00:18:03.408 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.408 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.408 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.666 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.666 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.666 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.666 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.666 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.666 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.666 { 00:18:03.666 "cntlid": 37, 00:18:03.666 "qid": 0, 00:18:03.666 "state": "enabled", 00:18:03.666 "thread": "nvmf_tgt_poll_group_000", 00:18:03.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:03.666 "listen_address": { 00:18:03.666 "trtype": "TCP", 00:18:03.666 "adrfam": "IPv4", 00:18:03.666 "traddr": "10.0.0.2", 00:18:03.666 "trsvcid": "4420" 00:18:03.666 }, 00:18:03.666 "peer_address": { 00:18:03.666 "trtype": "TCP", 00:18:03.666 "adrfam": "IPv4", 00:18:03.666 "traddr": "10.0.0.1", 00:18:03.666 "trsvcid": "43178" 00:18:03.666 }, 00:18:03.666 "auth": { 00:18:03.666 "state": "completed", 00:18:03.666 "digest": "sha256", 00:18:03.666 "dhgroup": "ffdhe6144" 00:18:03.666 } 00:18:03.666 } 00:18:03.666 ]' 00:18:03.666 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.666 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.666 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.666 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:03.666 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.666 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:03.666 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.666 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.925 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:03.925 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:04.493 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.493 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:04.493 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.493 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.493 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.493 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:04.493 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:04.493 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:04.752 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:04.752 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.752 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:04.752 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:04.752 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:04.752 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.752 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:04.752 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.752 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.752 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.752 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:04.752 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.752 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:05.011 00:18:05.011 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.011 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.011 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.270 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.270 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.270 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.270 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.270 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.270 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.270 { 00:18:05.270 "cntlid": 39, 00:18:05.270 "qid": 0, 00:18:05.270 "state": "enabled", 00:18:05.270 "thread": "nvmf_tgt_poll_group_000", 00:18:05.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:05.270 "listen_address": { 00:18:05.270 "trtype": 
"TCP", 00:18:05.270 "adrfam": "IPv4", 00:18:05.270 "traddr": "10.0.0.2", 00:18:05.270 "trsvcid": "4420" 00:18:05.270 }, 00:18:05.270 "peer_address": { 00:18:05.270 "trtype": "TCP", 00:18:05.270 "adrfam": "IPv4", 00:18:05.270 "traddr": "10.0.0.1", 00:18:05.270 "trsvcid": "43212" 00:18:05.270 }, 00:18:05.270 "auth": { 00:18:05.270 "state": "completed", 00:18:05.270 "digest": "sha256", 00:18:05.270 "dhgroup": "ffdhe6144" 00:18:05.270 } 00:18:05.270 } 00:18:05.270 ]' 00:18:05.270 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.270 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.270 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.270 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:05.270 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.270 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.270 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.270 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.529 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:05.529 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:06.097 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.097 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:06.097 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.097 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.097 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.098 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.098 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.098 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:06.098 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:06.098 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:06.098 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.098 14:38:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:06.098 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:06.098 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:06.098 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.098 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.098 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.098 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.098 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.098 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.098 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.098 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.665 00:18:06.665 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.665 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.665 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.924 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.925 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.925 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.925 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.925 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.925 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.925 { 00:18:06.925 "cntlid": 41, 00:18:06.925 "qid": 0, 00:18:06.925 "state": "enabled", 00:18:06.925 "thread": "nvmf_tgt_poll_group_000", 00:18:06.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:06.925 "listen_address": { 00:18:06.925 "trtype": "TCP", 00:18:06.925 "adrfam": "IPv4", 00:18:06.925 "traddr": "10.0.0.2", 00:18:06.925 "trsvcid": "4420" 00:18:06.925 }, 00:18:06.925 "peer_address": { 00:18:06.925 "trtype": "TCP", 00:18:06.925 "adrfam": "IPv4", 00:18:06.925 "traddr": "10.0.0.1", 00:18:06.925 "trsvcid": "43236" 00:18:06.925 }, 00:18:06.925 "auth": { 00:18:06.925 "state": "completed", 00:18:06.925 "digest": "sha256", 00:18:06.925 "dhgroup": "ffdhe8192" 00:18:06.925 } 00:18:06.925 } 00:18:06.925 ]' 00:18:06.925 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.925 14:38:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.925 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.925 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:06.925 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.925 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.925 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.925 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.183 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:07.183 14:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.751 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.322 00:18:08.322 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.322 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.322 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.322 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.322 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.322 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.322 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.322 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.322 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.322 { 00:18:08.322 "cntlid": 43, 00:18:08.322 "qid": 0, 00:18:08.322 "state": "enabled", 00:18:08.322 "thread": "nvmf_tgt_poll_group_000", 00:18:08.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:08.322 "listen_address": { 00:18:08.322 "trtype": "TCP", 00:18:08.322 "adrfam": "IPv4", 00:18:08.322 "traddr": "10.0.0.2", 00:18:08.322 "trsvcid": "4420" 00:18:08.322 }, 00:18:08.322 "peer_address": { 00:18:08.322 "trtype": "TCP", 00:18:08.322 "adrfam": "IPv4", 00:18:08.322 "traddr": "10.0.0.1", 00:18:08.322 "trsvcid": "34474" 00:18:08.322 }, 00:18:08.322 "auth": { 00:18:08.322 "state": "completed", 00:18:08.322 "digest": "sha256", 00:18:08.322 "dhgroup": "ffdhe8192" 00:18:08.322 } 00:18:08.322 } 00:18:08.322 ]' 00:18:08.322 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.582 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.582 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.582 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:08.582 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.582 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:08.582 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.582 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.582 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:08.582 14:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:09.153 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.153 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:09.153 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.153 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.153 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.153 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:09.153 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:09.153 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:09.412 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:09.412 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.412 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:09.412 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:09.412 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:09.412 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.412 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.412 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.412 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.412 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.412 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.412 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.412 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.981 00:18:09.981 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.981 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.981 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.981 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.981 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.981 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.981 14:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.981 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.981 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.981 { 00:18:09.981 "cntlid": 45, 00:18:09.981 "qid": 0, 00:18:09.981 "state": "enabled", 00:18:09.981 "thread": "nvmf_tgt_poll_group_000", 00:18:09.981 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:09.981 "listen_address": { 00:18:09.981 "trtype": "TCP", 00:18:09.981 "adrfam": "IPv4", 00:18:09.981 "traddr": "10.0.0.2", 00:18:09.981 "trsvcid": "4420" 00:18:09.981 }, 00:18:09.981 "peer_address": { 00:18:09.981 "trtype": "TCP", 00:18:09.981 "adrfam": "IPv4", 00:18:09.981 "traddr": "10.0.0.1", 00:18:09.981 "trsvcid": "34510" 00:18:09.981 }, 00:18:09.981 "auth": { 00:18:09.981 "state": "completed", 00:18:09.981 "digest": "sha256", 00:18:09.981 "dhgroup": "ffdhe8192" 00:18:09.981 } 00:18:09.981 } 00:18:09.981 ]' 00:18:09.981 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.981 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.981 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.241 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.241 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.241 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.241 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.241 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.241 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:10.241 14:38:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:10.811 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.811 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:10.811 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.811 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.811 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.811 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.811 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:10.811 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:11.071 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:11.071 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:18:11.071 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:11.071 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:11.071 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:11.071 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.071 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:11.071 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.071 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.071 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.071 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:11.071 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:11.071 14:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:11.641 00:18:11.641 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:11.641 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.641 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.641 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.641 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.641 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.641 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.641 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.641 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.641 { 00:18:11.641 "cntlid": 47, 00:18:11.641 "qid": 0, 00:18:11.641 "state": "enabled", 00:18:11.641 "thread": "nvmf_tgt_poll_group_000", 00:18:11.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:11.641 "listen_address": { 00:18:11.641 "trtype": "TCP", 00:18:11.641 "adrfam": "IPv4", 00:18:11.641 "traddr": "10.0.0.2", 00:18:11.641 "trsvcid": "4420" 00:18:11.641 }, 00:18:11.641 "peer_address": { 00:18:11.641 "trtype": "TCP", 00:18:11.641 "adrfam": "IPv4", 00:18:11.641 "traddr": "10.0.0.1", 00:18:11.641 "trsvcid": "34536" 00:18:11.641 }, 00:18:11.641 "auth": { 00:18:11.641 "state": "completed", 00:18:11.641 "digest": "sha256", 00:18:11.641 "dhgroup": "ffdhe8192" 00:18:11.641 } 00:18:11.641 } 00:18:11.641 ]' 00:18:11.641 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.641 14:38:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.641 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.641 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.641 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.901 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.901 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.901 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.901 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:11.901 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:12.470 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.470 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:12.470 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.470 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.470 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.470 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:12.471 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.471 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.471 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:12.471 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:12.730 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:12.730 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.730 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:12.731 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:12.731 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:12.731 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.731 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.731 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.731 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.731 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.731 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.731 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.731 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.989 00:18:12.989 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.989 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.989 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.990 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.990 14:38:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.990 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.990 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.990 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.990 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.990 { 00:18:12.990 "cntlid": 49, 00:18:12.990 "qid": 0, 00:18:12.990 "state": "enabled", 00:18:12.990 "thread": "nvmf_tgt_poll_group_000", 00:18:12.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:12.990 "listen_address": { 00:18:12.990 "trtype": "TCP", 00:18:12.990 "adrfam": "IPv4", 00:18:12.990 "traddr": "10.0.0.2", 00:18:12.990 "trsvcid": "4420" 00:18:12.990 }, 00:18:12.990 "peer_address": { 00:18:12.990 "trtype": "TCP", 00:18:12.990 "adrfam": "IPv4", 00:18:12.990 "traddr": "10.0.0.1", 00:18:12.990 "trsvcid": "34556" 00:18:12.990 }, 00:18:12.990 "auth": { 00:18:12.990 "state": "completed", 00:18:12.990 "digest": "sha384", 00:18:12.990 "dhgroup": "null" 00:18:12.990 } 00:18:12.990 } 00:18:12.990 ]' 00:18:12.990 14:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.990 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.990 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.249 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:13.249 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.249 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.249 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.249 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.249 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:13.249 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:13.889 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.889 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:13.889 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.889 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.889 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.889 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.889 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:13.889 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:14.148 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:14.148 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.148 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:14.148 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:14.148 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:14.148 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.149 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.149 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.149 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.149 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.149 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.149 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.149 14:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.149 00:18:14.149 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.149 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.149 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.408 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.408 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.409 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.409 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.409 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.409 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.409 { 00:18:14.409 "cntlid": 51, 
00:18:14.409 "qid": 0, 00:18:14.409 "state": "enabled", 00:18:14.409 "thread": "nvmf_tgt_poll_group_000", 00:18:14.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:14.409 "listen_address": { 00:18:14.409 "trtype": "TCP", 00:18:14.409 "adrfam": "IPv4", 00:18:14.409 "traddr": "10.0.0.2", 00:18:14.409 "trsvcid": "4420" 00:18:14.409 }, 00:18:14.409 "peer_address": { 00:18:14.409 "trtype": "TCP", 00:18:14.409 "adrfam": "IPv4", 00:18:14.409 "traddr": "10.0.0.1", 00:18:14.409 "trsvcid": "34574" 00:18:14.409 }, 00:18:14.409 "auth": { 00:18:14.409 "state": "completed", 00:18:14.409 "digest": "sha384", 00:18:14.409 "dhgroup": "null" 00:18:14.409 } 00:18:14.409 } 00:18:14.409 ]' 00:18:14.409 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.409 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.409 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.409 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:14.409 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.409 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.409 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.409 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.668 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret 
DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:14.668 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:15.237 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.237 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:15.237 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.237 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.237 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.237 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.237 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:15.237 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:15.496 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:18:15.496 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.496 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:15.496 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:15.496 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:15.496 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.496 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.496 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.496 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.496 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.496 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.496 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.496 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.756 00:18:15.756 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.756 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.756 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.756 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.756 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.756 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.756 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.756 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.756 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.756 { 00:18:15.756 "cntlid": 53, 00:18:15.756 "qid": 0, 00:18:15.756 "state": "enabled", 00:18:15.756 "thread": "nvmf_tgt_poll_group_000", 00:18:15.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:15.756 "listen_address": { 00:18:15.756 "trtype": "TCP", 00:18:15.756 "adrfam": "IPv4", 00:18:15.756 "traddr": "10.0.0.2", 00:18:15.756 "trsvcid": "4420" 00:18:15.756 }, 00:18:15.756 "peer_address": { 00:18:15.756 "trtype": "TCP", 00:18:15.756 "adrfam": "IPv4", 00:18:15.756 "traddr": "10.0.0.1", 00:18:15.756 "trsvcid": "34616" 00:18:15.756 }, 00:18:15.756 "auth": { 00:18:15.756 "state": "completed", 00:18:15.756 "digest": "sha384", 00:18:15.756 "dhgroup": "null" 00:18:15.756 } 00:18:15.756 } 
00:18:15.756 ]' 00:18:15.756 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.756 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.756 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.756 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:15.756 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.016 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.016 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.016 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.016 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:16.016 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:16.585 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.585 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.585 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:16.585 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.585 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.585 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.585 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.585 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:16.585 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:16.845 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:16.845 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.845 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:16.845 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:16.845 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:16.845 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.845 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:16.845 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.845 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.845 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.845 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:16.845 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.845 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.105 00:18:17.105 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.105 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.105 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.105 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.105 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:17.105 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.105 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.105 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.105 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.105 { 00:18:17.105 "cntlid": 55, 00:18:17.105 "qid": 0, 00:18:17.105 "state": "enabled", 00:18:17.105 "thread": "nvmf_tgt_poll_group_000", 00:18:17.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:17.105 "listen_address": { 00:18:17.105 "trtype": "TCP", 00:18:17.105 "adrfam": "IPv4", 00:18:17.105 "traddr": "10.0.0.2", 00:18:17.105 "trsvcid": "4420" 00:18:17.105 }, 00:18:17.105 "peer_address": { 00:18:17.105 "trtype": "TCP", 00:18:17.105 "adrfam": "IPv4", 00:18:17.105 "traddr": "10.0.0.1", 00:18:17.105 "trsvcid": "34642" 00:18:17.105 }, 00:18:17.105 "auth": { 00:18:17.105 "state": "completed", 00:18:17.105 "digest": "sha384", 00:18:17.105 "dhgroup": "null" 00:18:17.105 } 00:18:17.105 } 00:18:17.105 ]' 00:18:17.105 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.364 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.364 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.364 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:17.364 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.364 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.364 14:38:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.364 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.364 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:17.364 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:17.932 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.932 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:17.932 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.932 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.932 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.932 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.932 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.932 14:38:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.932 14:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:18.191 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:18.191 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.191 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:18.191 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:18.191 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:18.191 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.191 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.191 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.191 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.191 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.191 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.191 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.191 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.451 00:18:18.451 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.451 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.451 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.710 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.710 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.710 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.710 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.710 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.710 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.710 { 00:18:18.710 "cntlid": 57, 00:18:18.710 "qid": 0, 00:18:18.710 "state": "enabled", 00:18:18.710 "thread": "nvmf_tgt_poll_group_000", 00:18:18.710 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:18.710 "listen_address": { 00:18:18.710 "trtype": "TCP", 00:18:18.710 "adrfam": "IPv4", 00:18:18.710 "traddr": "10.0.0.2", 00:18:18.710 "trsvcid": "4420" 00:18:18.710 }, 00:18:18.710 "peer_address": { 00:18:18.710 "trtype": "TCP", 00:18:18.710 "adrfam": "IPv4", 00:18:18.710 "traddr": "10.0.0.1", 00:18:18.710 "trsvcid": "41158" 00:18:18.710 }, 00:18:18.710 "auth": { 00:18:18.710 "state": "completed", 00:18:18.710 "digest": "sha384", 00:18:18.710 "dhgroup": "ffdhe2048" 00:18:18.710 } 00:18:18.710 } 00:18:18.710 ]' 00:18:18.710 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.710 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.710 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.710 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:18.710 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.710 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.710 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.710 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.996 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret 
DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:18.996 14:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:19.565 14:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.565 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.825 00:18:19.825 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.825 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.825 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.085 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.085 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.085 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.085 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.085 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.085 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.085 { 00:18:20.085 "cntlid": 59, 00:18:20.085 "qid": 0, 00:18:20.085 "state": "enabled", 00:18:20.085 "thread": "nvmf_tgt_poll_group_000", 00:18:20.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:20.085 "listen_address": { 00:18:20.085 "trtype": "TCP", 00:18:20.085 "adrfam": "IPv4", 00:18:20.085 "traddr": "10.0.0.2", 00:18:20.085 "trsvcid": "4420" 00:18:20.085 }, 00:18:20.085 "peer_address": { 00:18:20.085 "trtype": "TCP", 00:18:20.085 "adrfam": "IPv4", 00:18:20.085 "traddr": "10.0.0.1", 00:18:20.085 "trsvcid": "41194" 00:18:20.085 }, 00:18:20.085 "auth": { 00:18:20.085 "state": 
"completed", 00:18:20.085 "digest": "sha384", 00:18:20.085 "dhgroup": "ffdhe2048" 00:18:20.085 } 00:18:20.085 } 00:18:20.085 ]' 00:18:20.085 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.085 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.085 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.085 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:20.085 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.085 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.085 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.085 14:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.345 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:20.345 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:20.913 14:38:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.913 14:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.172 00:18:21.172 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.172 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.172 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.430 
14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.431 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.431 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.431 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.431 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.431 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.431 { 00:18:21.431 "cntlid": 61, 00:18:21.431 "qid": 0, 00:18:21.431 "state": "enabled", 00:18:21.431 "thread": "nvmf_tgt_poll_group_000", 00:18:21.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:21.431 "listen_address": { 00:18:21.431 "trtype": "TCP", 00:18:21.431 "adrfam": "IPv4", 00:18:21.431 "traddr": "10.0.0.2", 00:18:21.431 "trsvcid": "4420" 00:18:21.431 }, 00:18:21.431 "peer_address": { 00:18:21.431 "trtype": "TCP", 00:18:21.431 "adrfam": "IPv4", 00:18:21.431 "traddr": "10.0.0.1", 00:18:21.431 "trsvcid": "41226" 00:18:21.431 }, 00:18:21.431 "auth": { 00:18:21.431 "state": "completed", 00:18:21.431 "digest": "sha384", 00:18:21.431 "dhgroup": "ffdhe2048" 00:18:21.431 } 00:18:21.431 } 00:18:21.431 ]' 00:18:21.431 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.431 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.431 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.431 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:21.431 14:38:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.431 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.431 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.431 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.690 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:21.690 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.265 
14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.265 14:38:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.265 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.528 00:18:22.528 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.528 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.528 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.787 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.787 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.787 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.787 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.787 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.787 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.787 { 00:18:22.787 "cntlid": 63, 00:18:22.787 
"qid": 0, 00:18:22.787 "state": "enabled", 00:18:22.787 "thread": "nvmf_tgt_poll_group_000", 00:18:22.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:22.787 "listen_address": { 00:18:22.787 "trtype": "TCP", 00:18:22.787 "adrfam": "IPv4", 00:18:22.787 "traddr": "10.0.0.2", 00:18:22.787 "trsvcid": "4420" 00:18:22.787 }, 00:18:22.787 "peer_address": { 00:18:22.787 "trtype": "TCP", 00:18:22.787 "adrfam": "IPv4", 00:18:22.787 "traddr": "10.0.0.1", 00:18:22.787 "trsvcid": "41258" 00:18:22.787 }, 00:18:22.787 "auth": { 00:18:22.787 "state": "completed", 00:18:22.787 "digest": "sha384", 00:18:22.787 "dhgroup": "ffdhe2048" 00:18:22.787 } 00:18:22.787 } 00:18:22.787 ]' 00:18:22.787 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.787 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.787 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.787 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:22.787 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.787 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.787 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.787 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.048 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:23.048 14:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:23.617 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.617 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:23.617 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.617 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.617 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.617 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.617 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.617 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:23.618 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:23.618 14:38:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:23.618 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.618 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:23.618 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:23.618 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:23.618 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.618 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.618 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.618 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.618 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.618 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.618 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.618 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.877 00:18:23.877 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.877 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.877 14:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.136 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.136 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.136 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.136 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.136 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.136 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.136 { 00:18:24.136 "cntlid": 65, 00:18:24.136 "qid": 0, 00:18:24.136 "state": "enabled", 00:18:24.136 "thread": "nvmf_tgt_poll_group_000", 00:18:24.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:24.136 "listen_address": { 00:18:24.136 "trtype": "TCP", 00:18:24.136 "adrfam": "IPv4", 00:18:24.136 "traddr": "10.0.0.2", 00:18:24.136 "trsvcid": "4420" 00:18:24.136 }, 00:18:24.136 "peer_address": { 00:18:24.136 "trtype": "TCP", 00:18:24.136 "adrfam": "IPv4", 00:18:24.136 "traddr": "10.0.0.1", 00:18:24.136 "trsvcid": "41290" 00:18:24.136 }, 00:18:24.136 "auth": { 00:18:24.136 "state": 
"completed", 00:18:24.136 "digest": "sha384", 00:18:24.136 "dhgroup": "ffdhe3072" 00:18:24.136 } 00:18:24.136 } 00:18:24.136 ]' 00:18:24.136 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.136 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.136 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.136 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:24.136 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.136 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.136 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.136 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.397 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:24.397 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret 
DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:24.967 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.967 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:24.967 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.967 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.967 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.967 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.967 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:24.967 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:24.967 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:24.967 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.967 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:24.967 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:24.967 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:24.967 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.967 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.967 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.967 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.226 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.226 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.226 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.226 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.226 00:18:25.226 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.226 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.226 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.486 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.486 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.486 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.486 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.486 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.486 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.486 { 00:18:25.486 "cntlid": 67, 00:18:25.486 "qid": 0, 00:18:25.486 "state": "enabled", 00:18:25.486 "thread": "nvmf_tgt_poll_group_000", 00:18:25.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:25.486 "listen_address": { 00:18:25.486 "trtype": "TCP", 00:18:25.486 "adrfam": "IPv4", 00:18:25.486 "traddr": "10.0.0.2", 00:18:25.486 "trsvcid": "4420" 00:18:25.486 }, 00:18:25.486 "peer_address": { 00:18:25.486 "trtype": "TCP", 00:18:25.486 "adrfam": "IPv4", 00:18:25.486 "traddr": "10.0.0.1", 00:18:25.486 "trsvcid": "41318" 00:18:25.486 }, 00:18:25.486 "auth": { 00:18:25.486 "state": "completed", 00:18:25.486 "digest": "sha384", 00:18:25.486 "dhgroup": "ffdhe3072" 00:18:25.486 } 00:18:25.486 } 00:18:25.486 ]' 00:18:25.486 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.486 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.486 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.486 14:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:25.486 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.486 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.486 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.486 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.746 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:25.746 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:26.313 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.313 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:26.313 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:26.313 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.313 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.313 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.313 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:26.314 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:26.573 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:26.573 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.573 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:26.573 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:26.573 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:26.573 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.573 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.573 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.573 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:26.573 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.573 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.573 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.573 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.834 00:18:26.834 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.834 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.834 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.834 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.834 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.834 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.834 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.834 14:38:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.834 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.834 { 00:18:26.834 "cntlid": 69, 00:18:26.834 "qid": 0, 00:18:26.834 "state": "enabled", 00:18:26.834 "thread": "nvmf_tgt_poll_group_000", 00:18:26.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:26.834 "listen_address": { 00:18:26.834 "trtype": "TCP", 00:18:26.834 "adrfam": "IPv4", 00:18:26.834 "traddr": "10.0.0.2", 00:18:26.834 "trsvcid": "4420" 00:18:26.834 }, 00:18:26.834 "peer_address": { 00:18:26.834 "trtype": "TCP", 00:18:26.834 "adrfam": "IPv4", 00:18:26.834 "traddr": "10.0.0.1", 00:18:26.834 "trsvcid": "41342" 00:18:26.834 }, 00:18:26.834 "auth": { 00:18:26.834 "state": "completed", 00:18:26.834 "digest": "sha384", 00:18:26.834 "dhgroup": "ffdhe3072" 00:18:26.834 } 00:18:26.834 } 00:18:26.834 ]' 00:18:26.834 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.834 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.834 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.834 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:26.834 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.094 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.094 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.094 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.094 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:27.094 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:27.663 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.663 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:27.663 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.663 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.663 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.663 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.663 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:27.663 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:27.922 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:27.922 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.922 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:27.922 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:27.922 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:27.922 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.922 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:27.922 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.922 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.923 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.923 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.923 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.923 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:28.182 00:18:28.182 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.182 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.182 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.182 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.182 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.182 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.182 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.182 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.182 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.182 { 00:18:28.182 "cntlid": 71, 00:18:28.182 "qid": 0, 00:18:28.182 "state": "enabled", 00:18:28.182 "thread": "nvmf_tgt_poll_group_000", 00:18:28.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:28.182 "listen_address": { 00:18:28.182 "trtype": "TCP", 00:18:28.182 "adrfam": "IPv4", 00:18:28.182 "traddr": "10.0.0.2", 00:18:28.182 "trsvcid": "4420" 00:18:28.182 }, 00:18:28.182 "peer_address": { 00:18:28.182 "trtype": "TCP", 00:18:28.182 "adrfam": "IPv4", 00:18:28.182 "traddr": "10.0.0.1", 
00:18:28.182 "trsvcid": "56278" 00:18:28.182 }, 00:18:28.182 "auth": { 00:18:28.182 "state": "completed", 00:18:28.182 "digest": "sha384", 00:18:28.182 "dhgroup": "ffdhe3072" 00:18:28.182 } 00:18:28.182 } 00:18:28.182 ]' 00:18:28.182 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.442 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.442 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.442 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:28.442 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.442 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.442 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.442 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.442 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:28.442 14:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:29.013 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.013 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:29.013 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.013 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.013 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.013 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.013 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.013 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:29.013 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:29.272 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:29.272 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.272 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:29.272 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:29.272 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:29.272 14:38:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.272 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.272 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.272 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.272 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.272 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.272 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.272 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.531 00:18:29.531 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.531 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.531 14:38:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.791 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.791 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.791 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.791 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.791 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.791 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.791 { 00:18:29.791 "cntlid": 73, 00:18:29.791 "qid": 0, 00:18:29.791 "state": "enabled", 00:18:29.791 "thread": "nvmf_tgt_poll_group_000", 00:18:29.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:29.791 "listen_address": { 00:18:29.791 "trtype": "TCP", 00:18:29.791 "adrfam": "IPv4", 00:18:29.791 "traddr": "10.0.0.2", 00:18:29.791 "trsvcid": "4420" 00:18:29.791 }, 00:18:29.791 "peer_address": { 00:18:29.791 "trtype": "TCP", 00:18:29.791 "adrfam": "IPv4", 00:18:29.791 "traddr": "10.0.0.1", 00:18:29.791 "trsvcid": "56308" 00:18:29.791 }, 00:18:29.791 "auth": { 00:18:29.791 "state": "completed", 00:18:29.791 "digest": "sha384", 00:18:29.791 "dhgroup": "ffdhe4096" 00:18:29.791 } 00:18:29.791 } 00:18:29.791 ]' 00:18:29.791 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.791 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.791 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.791 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:29.791 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.791 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.791 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.791 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.050 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:30.050 14:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:30.619 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.619 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:30.619 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.619 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.619 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.619 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.619 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:30.619 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:30.619 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:30.619 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.619 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:30.619 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:30.619 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:30.619 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.620 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.620 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.620 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:30.620 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.620 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.620 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.620 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.879 00:18:30.879 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.879 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.879 14:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.138 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.138 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.138 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.139 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.139 
14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.139 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.139 { 00:18:31.139 "cntlid": 75, 00:18:31.139 "qid": 0, 00:18:31.139 "state": "enabled", 00:18:31.139 "thread": "nvmf_tgt_poll_group_000", 00:18:31.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:31.139 "listen_address": { 00:18:31.139 "trtype": "TCP", 00:18:31.139 "adrfam": "IPv4", 00:18:31.139 "traddr": "10.0.0.2", 00:18:31.139 "trsvcid": "4420" 00:18:31.139 }, 00:18:31.139 "peer_address": { 00:18:31.139 "trtype": "TCP", 00:18:31.139 "adrfam": "IPv4", 00:18:31.139 "traddr": "10.0.0.1", 00:18:31.139 "trsvcid": "56346" 00:18:31.139 }, 00:18:31.139 "auth": { 00:18:31.139 "state": "completed", 00:18:31.139 "digest": "sha384", 00:18:31.139 "dhgroup": "ffdhe4096" 00:18:31.139 } 00:18:31.139 } 00:18:31.139 ]' 00:18:31.139 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.139 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.139 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.139 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:31.139 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.139 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.139 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.139 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.399 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:31.399 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:31.969 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.969 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:31.969 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.969 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.969 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.969 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.969 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:31.969 14:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:32.230 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:32.230 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.230 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:32.230 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:32.230 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:32.230 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.230 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.230 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.230 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.230 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.230 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.230 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.230 14:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.230 00:18:32.491 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.491 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.491 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.491 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.491 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.491 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.491 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.491 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.491 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.491 { 00:18:32.491 "cntlid": 77, 00:18:32.491 "qid": 0, 00:18:32.491 "state": "enabled", 00:18:32.491 "thread": "nvmf_tgt_poll_group_000", 00:18:32.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:32.491 "listen_address": { 00:18:32.491 "trtype": "TCP", 00:18:32.491 "adrfam": "IPv4", 00:18:32.491 "traddr": "10.0.0.2", 00:18:32.491 "trsvcid": "4420" 00:18:32.491 }, 00:18:32.491 "peer_address": { 
00:18:32.491 "trtype": "TCP", 00:18:32.491 "adrfam": "IPv4", 00:18:32.491 "traddr": "10.0.0.1", 00:18:32.491 "trsvcid": "56378" 00:18:32.491 }, 00:18:32.491 "auth": { 00:18:32.491 "state": "completed", 00:18:32.491 "digest": "sha384", 00:18:32.491 "dhgroup": "ffdhe4096" 00:18:32.491 } 00:18:32.491 } 00:18:32.491 ]' 00:18:32.491 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.491 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.491 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.491 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:32.491 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.752 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.752 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.752 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.752 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:32.752 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret 
DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:33.322 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.322 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:33.322 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.322 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.322 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.322 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.322 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.322 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.581 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:33.581 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.581 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:33.581 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:33.581 14:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:33.581 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.581 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:33.581 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.581 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.581 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.582 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:33.582 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.582 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.841 00:18:33.841 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.841 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.841 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.841 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.841 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.841 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.841 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.841 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.841 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.841 { 00:18:33.841 "cntlid": 79, 00:18:33.841 "qid": 0, 00:18:33.841 "state": "enabled", 00:18:33.841 "thread": "nvmf_tgt_poll_group_000", 00:18:33.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:33.841 "listen_address": { 00:18:33.841 "trtype": "TCP", 00:18:33.841 "adrfam": "IPv4", 00:18:33.841 "traddr": "10.0.0.2", 00:18:33.841 "trsvcid": "4420" 00:18:33.841 }, 00:18:33.841 "peer_address": { 00:18:33.841 "trtype": "TCP", 00:18:33.841 "adrfam": "IPv4", 00:18:33.841 "traddr": "10.0.0.1", 00:18:33.841 "trsvcid": "56406" 00:18:33.841 }, 00:18:33.841 "auth": { 00:18:33.841 "state": "completed", 00:18:33.841 "digest": "sha384", 00:18:33.841 "dhgroup": "ffdhe4096" 00:18:33.841 } 00:18:33.841 } 00:18:33.841 ]' 00:18:33.841 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.100 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.100 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.100 14:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:34.100 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.100 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.100 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.100 14:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.100 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:34.100 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:34.668 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.927 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:34.927 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.927 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:34.927 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.928 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.928 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.928 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:34.928 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:34.928 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:34.928 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.928 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:34.928 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:34.928 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:34.928 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.928 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.928 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.928 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:34.928 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.928 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.928 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.928 14:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.496 00:18:35.496 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.496 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.496 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.496 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.496 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.496 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.496 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.496 14:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.496 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.496 { 00:18:35.496 "cntlid": 81, 00:18:35.496 "qid": 0, 00:18:35.496 "state": "enabled", 00:18:35.496 "thread": "nvmf_tgt_poll_group_000", 00:18:35.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:35.496 "listen_address": { 00:18:35.496 "trtype": "TCP", 00:18:35.496 "adrfam": "IPv4", 00:18:35.496 "traddr": "10.0.0.2", 00:18:35.496 "trsvcid": "4420" 00:18:35.496 }, 00:18:35.496 "peer_address": { 00:18:35.496 "trtype": "TCP", 00:18:35.496 "adrfam": "IPv4", 00:18:35.496 "traddr": "10.0.0.1", 00:18:35.496 "trsvcid": "56428" 00:18:35.496 }, 00:18:35.496 "auth": { 00:18:35.496 "state": "completed", 00:18:35.496 "digest": "sha384", 00:18:35.496 "dhgroup": "ffdhe6144" 00:18:35.496 } 00:18:35.496 } 00:18:35.496 ]' 00:18:35.496 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.496 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.496 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.496 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:35.496 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.496 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.496 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.496 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.755 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:35.755 14:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:36.324 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.324 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:36.324 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.324 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.324 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.324 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.324 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:36.324 14:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:36.582 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:36.582 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.582 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:36.582 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:36.582 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:36.582 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.582 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.582 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.582 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.582 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.582 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.582 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.582 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.839 00:18:36.839 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.839 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.839 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.097 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.097 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.097 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.097 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.097 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.097 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.097 { 00:18:37.097 "cntlid": 83, 00:18:37.097 "qid": 0, 00:18:37.097 "state": "enabled", 00:18:37.097 "thread": "nvmf_tgt_poll_group_000", 00:18:37.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:37.097 "listen_address": { 00:18:37.097 "trtype": "TCP", 00:18:37.097 "adrfam": "IPv4", 00:18:37.097 "traddr": "10.0.0.2", 00:18:37.097 
"trsvcid": "4420" 00:18:37.097 }, 00:18:37.097 "peer_address": { 00:18:37.097 "trtype": "TCP", 00:18:37.097 "adrfam": "IPv4", 00:18:37.097 "traddr": "10.0.0.1", 00:18:37.097 "trsvcid": "56444" 00:18:37.097 }, 00:18:37.097 "auth": { 00:18:37.097 "state": "completed", 00:18:37.097 "digest": "sha384", 00:18:37.097 "dhgroup": "ffdhe6144" 00:18:37.097 } 00:18:37.097 } 00:18:37.097 ]' 00:18:37.097 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.097 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.097 14:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.097 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:37.097 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.097 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.097 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.097 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.355 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:37.355 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.922 14:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.492 00:18:38.492 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.492 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.492 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.492 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.492 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.492 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.492 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.492 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.492 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.492 { 00:18:38.492 "cntlid": 85, 00:18:38.492 "qid": 0, 00:18:38.492 "state": "enabled", 00:18:38.492 "thread": "nvmf_tgt_poll_group_000", 00:18:38.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:38.492 "listen_address": { 00:18:38.492 "trtype": "TCP", 00:18:38.492 "adrfam": "IPv4", 00:18:38.492 "traddr": "10.0.0.2", 00:18:38.492 "trsvcid": "4420" 00:18:38.492 }, 00:18:38.492 "peer_address": { 00:18:38.492 "trtype": "TCP", 00:18:38.492 "adrfam": "IPv4", 00:18:38.492 "traddr": "10.0.0.1", 00:18:38.492 "trsvcid": "44532" 00:18:38.492 }, 00:18:38.492 "auth": { 00:18:38.492 "state": "completed", 00:18:38.492 "digest": "sha384", 00:18:38.492 "dhgroup": "ffdhe6144" 00:18:38.492 } 00:18:38.492 } 00:18:38.492 ]' 00:18:38.492 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.492 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.492 14:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.492 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:38.492 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.492 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.492 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.492 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.751 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:38.751 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:39.321 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.321 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:39.321 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.321 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.321 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.321 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.321 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:39.321 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:39.580 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:39.580 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.580 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:39.580 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:39.580 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:39.580 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.580 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:39.580 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.580 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.580 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.580 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:39.580 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:39.580 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:39.839 00:18:39.839 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.839 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.839 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.099 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.099 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.099 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.099 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.099 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.099 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.099 { 00:18:40.099 "cntlid": 87, 00:18:40.099 "qid": 0, 00:18:40.099 "state": "enabled", 00:18:40.099 "thread": "nvmf_tgt_poll_group_000", 00:18:40.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:40.099 "listen_address": { 00:18:40.099 "trtype": "TCP", 00:18:40.099 "adrfam": "IPv4", 00:18:40.099 "traddr": "10.0.0.2", 00:18:40.099 "trsvcid": "4420" 00:18:40.099 }, 00:18:40.099 "peer_address": { 00:18:40.099 "trtype": "TCP", 00:18:40.099 "adrfam": "IPv4", 00:18:40.099 "traddr": "10.0.0.1", 00:18:40.099 "trsvcid": "44560" 00:18:40.099 }, 00:18:40.099 "auth": { 00:18:40.099 "state": "completed", 00:18:40.099 "digest": "sha384", 00:18:40.099 "dhgroup": "ffdhe6144" 00:18:40.099 } 00:18:40.099 } 00:18:40.099 ]' 00:18:40.099 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.099 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.099 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.099 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:40.099 14:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.099 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.099 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.099 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.357 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:40.357 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:40.924 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.924 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:40.924 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.924 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.924 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.924 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.924 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.924 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:40.924 14:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:40.924 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:40.924 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.924 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:40.924 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:40.925 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:40.925 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.925 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.925 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.925 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.925 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.925 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.925 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.925 14:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.492 00:18:41.492 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.492 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.492 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.492 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.492 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.492 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.492 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.492 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.492 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.492 { 00:18:41.492 "cntlid": 89, 00:18:41.492 "qid": 0, 00:18:41.492 "state": "enabled", 00:18:41.492 "thread": "nvmf_tgt_poll_group_000", 00:18:41.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:41.492 "listen_address": { 00:18:41.492 "trtype": "TCP", 00:18:41.492 "adrfam": "IPv4", 00:18:41.492 "traddr": "10.0.0.2", 00:18:41.492 
"trsvcid": "4420" 00:18:41.492 }, 00:18:41.492 "peer_address": { 00:18:41.492 "trtype": "TCP", 00:18:41.492 "adrfam": "IPv4", 00:18:41.492 "traddr": "10.0.0.1", 00:18:41.492 "trsvcid": "44594" 00:18:41.492 }, 00:18:41.492 "auth": { 00:18:41.492 "state": "completed", 00:18:41.492 "digest": "sha384", 00:18:41.492 "dhgroup": "ffdhe8192" 00:18:41.492 } 00:18:41.492 } 00:18:41.492 ]' 00:18:41.492 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.751 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.751 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.751 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:41.751 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.751 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.751 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.751 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.751 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:41.752 14:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:42.318 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.318 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:42.318 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.318 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.318 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.318 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.318 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:42.318 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:42.575 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:42.575 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.575 14:38:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:42.575 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:42.575 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:42.575 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.575 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.575 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.575 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.575 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.575 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.575 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.575 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.142 00:18:43.142 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.142 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.142 14:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.142 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.142 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.142 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.142 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.142 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.142 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.142 { 00:18:43.142 "cntlid": 91, 00:18:43.142 "qid": 0, 00:18:43.142 "state": "enabled", 00:18:43.142 "thread": "nvmf_tgt_poll_group_000", 00:18:43.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:43.142 "listen_address": { 00:18:43.142 "trtype": "TCP", 00:18:43.142 "adrfam": "IPv4", 00:18:43.142 "traddr": "10.0.0.2", 00:18:43.142 "trsvcid": "4420" 00:18:43.142 }, 00:18:43.142 "peer_address": { 00:18:43.142 "trtype": "TCP", 00:18:43.142 "adrfam": "IPv4", 00:18:43.142 "traddr": "10.0.0.1", 00:18:43.142 "trsvcid": "44618" 00:18:43.142 }, 00:18:43.142 "auth": { 00:18:43.142 "state": "completed", 00:18:43.142 "digest": "sha384", 00:18:43.142 "dhgroup": "ffdhe8192" 00:18:43.142 } 00:18:43.142 } 00:18:43.142 ]' 00:18:43.142 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.142 14:38:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.142 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.401 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:43.401 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.401 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.401 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.401 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.401 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:43.401 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:43.969 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.228 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.796 00:18:44.796 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.796 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.796 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.056 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.056 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.057 14:38:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.057 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.057 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.057 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.057 { 00:18:45.057 "cntlid": 93, 00:18:45.057 "qid": 0, 00:18:45.057 "state": "enabled", 00:18:45.057 "thread": "nvmf_tgt_poll_group_000", 00:18:45.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:45.057 "listen_address": { 00:18:45.057 "trtype": "TCP", 00:18:45.057 "adrfam": "IPv4", 00:18:45.057 "traddr": "10.0.0.2", 00:18:45.057 "trsvcid": "4420" 00:18:45.057 }, 00:18:45.057 "peer_address": { 00:18:45.057 "trtype": "TCP", 00:18:45.057 "adrfam": "IPv4", 00:18:45.057 "traddr": "10.0.0.1", 00:18:45.057 "trsvcid": "44646" 00:18:45.057 }, 00:18:45.057 "auth": { 00:18:45.057 "state": "completed", 00:18:45.057 "digest": "sha384", 00:18:45.057 "dhgroup": "ffdhe8192" 00:18:45.057 } 00:18:45.057 } 00:18:45.057 ]' 00:18:45.057 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.057 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.057 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.057 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:45.057 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.057 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.057 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.057 14:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.317 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:45.317 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:45.885 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.885 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:45.885 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.885 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.885 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.885 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.885 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:45.885 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:45.885 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:45.885 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.885 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:45.886 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:45.886 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:45.886 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.886 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:45.886 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.886 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.886 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.886 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:45.886 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.886 14:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:46.453 00:18:46.453 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.453 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.453 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.453 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.453 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.453 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.453 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.453 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.453 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.453 { 00:18:46.453 "cntlid": 95, 00:18:46.453 "qid": 0, 00:18:46.453 "state": "enabled", 00:18:46.453 "thread": "nvmf_tgt_poll_group_000", 00:18:46.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:46.453 "listen_address": { 00:18:46.453 "trtype": "TCP", 00:18:46.453 "adrfam": 
"IPv4", 00:18:46.453 "traddr": "10.0.0.2", 00:18:46.453 "trsvcid": "4420" 00:18:46.453 }, 00:18:46.453 "peer_address": { 00:18:46.453 "trtype": "TCP", 00:18:46.453 "adrfam": "IPv4", 00:18:46.453 "traddr": "10.0.0.1", 00:18:46.453 "trsvcid": "44664" 00:18:46.453 }, 00:18:46.453 "auth": { 00:18:46.453 "state": "completed", 00:18:46.453 "digest": "sha384", 00:18:46.453 "dhgroup": "ffdhe8192" 00:18:46.453 } 00:18:46.453 } 00:18:46.453 ]' 00:18:46.453 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.712 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.712 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.712 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:46.712 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.712 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.712 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.712 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.712 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:46.712 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:47.280 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.280 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:47.280 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.280 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.280 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.280 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:47.280 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.280 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.280 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:47.280 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:47.541 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:47.541 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.541 
14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:47.541 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:47.541 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:47.541 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.541 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.541 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.541 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.541 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.541 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.541 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.541 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.801 00:18:47.801 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.801 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.801 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.061 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.061 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.061 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.061 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.061 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.061 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.061 { 00:18:48.061 "cntlid": 97, 00:18:48.061 "qid": 0, 00:18:48.061 "state": "enabled", 00:18:48.061 "thread": "nvmf_tgt_poll_group_000", 00:18:48.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:48.061 "listen_address": { 00:18:48.061 "trtype": "TCP", 00:18:48.061 "adrfam": "IPv4", 00:18:48.061 "traddr": "10.0.0.2", 00:18:48.061 "trsvcid": "4420" 00:18:48.061 }, 00:18:48.061 "peer_address": { 00:18:48.061 "trtype": "TCP", 00:18:48.061 "adrfam": "IPv4", 00:18:48.061 "traddr": "10.0.0.1", 00:18:48.061 "trsvcid": "57528" 00:18:48.061 }, 00:18:48.061 "auth": { 00:18:48.061 "state": "completed", 00:18:48.061 "digest": "sha512", 00:18:48.061 "dhgroup": "null" 00:18:48.061 } 00:18:48.061 } 00:18:48.061 ]' 00:18:48.061 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.061 14:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.061 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.061 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:48.061 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.061 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.061 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.061 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.320 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:48.320 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.889 14:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.889 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.149 00:18:49.149 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.149 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.149 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.409 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.409 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.409 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.409 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.409 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.409 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.409 { 00:18:49.409 "cntlid": 99, 00:18:49.409 "qid": 0, 00:18:49.409 "state": "enabled", 00:18:49.409 "thread": "nvmf_tgt_poll_group_000", 00:18:49.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:49.409 "listen_address": { 00:18:49.409 "trtype": "TCP", 00:18:49.409 "adrfam": "IPv4", 00:18:49.409 "traddr": "10.0.0.2", 00:18:49.409 "trsvcid": "4420" 00:18:49.409 }, 00:18:49.409 "peer_address": { 00:18:49.409 "trtype": "TCP", 00:18:49.409 "adrfam": "IPv4", 00:18:49.409 "traddr": "10.0.0.1", 00:18:49.409 "trsvcid": "57566" 00:18:49.409 }, 00:18:49.409 "auth": { 00:18:49.409 "state": "completed", 00:18:49.409 "digest": "sha512", 00:18:49.409 "dhgroup": "null" 00:18:49.409 } 00:18:49.409 } 00:18:49.409 ]' 00:18:49.409 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.409 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.409 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.409 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:49.409 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.409 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.409 
14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.409 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.669 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:49.669 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.238 
14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.238 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.498 00:18:50.498 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.498 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.498 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.757 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.757 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.757 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.757 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.757 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.757 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.757 { 00:18:50.757 "cntlid": 101, 00:18:50.757 "qid": 0, 00:18:50.757 "state": "enabled", 00:18:50.757 "thread": "nvmf_tgt_poll_group_000", 00:18:50.757 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:50.757 "listen_address": { 00:18:50.757 "trtype": "TCP", 00:18:50.757 "adrfam": "IPv4", 00:18:50.757 "traddr": "10.0.0.2", 00:18:50.757 "trsvcid": "4420" 00:18:50.757 }, 00:18:50.757 "peer_address": { 00:18:50.757 "trtype": "TCP", 00:18:50.757 "adrfam": "IPv4", 00:18:50.757 "traddr": "10.0.0.1", 00:18:50.757 "trsvcid": "57588" 00:18:50.757 }, 00:18:50.757 "auth": { 00:18:50.757 "state": "completed", 00:18:50.757 "digest": "sha512", 00:18:50.757 "dhgroup": "null" 00:18:50.757 } 00:18:50.757 } 00:18:50.757 ]' 00:18:50.757 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.757 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.757 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.757 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:50.757 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.757 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.757 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.757 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.017 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:51.017 14:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:51.586 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:51.898 00:18:51.898 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.898 
14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.898 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.216 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.216 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.216 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.216 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.216 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.216 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.216 { 00:18:52.216 "cntlid": 103, 00:18:52.216 "qid": 0, 00:18:52.216 "state": "enabled", 00:18:52.216 "thread": "nvmf_tgt_poll_group_000", 00:18:52.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:18:52.216 "listen_address": { 00:18:52.216 "trtype": "TCP", 00:18:52.216 "adrfam": "IPv4", 00:18:52.216 "traddr": "10.0.0.2", 00:18:52.216 "trsvcid": "4420" 00:18:52.216 }, 00:18:52.216 "peer_address": { 00:18:52.216 "trtype": "TCP", 00:18:52.216 "adrfam": "IPv4", 00:18:52.216 "traddr": "10.0.0.1", 00:18:52.216 "trsvcid": "57616" 00:18:52.216 }, 00:18:52.216 "auth": { 00:18:52.216 "state": "completed", 00:18:52.216 "digest": "sha512", 00:18:52.216 "dhgroup": "null" 00:18:52.216 } 00:18:52.216 } 00:18:52.216 ]' 00:18:52.216 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.216 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:18:52.216 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.216 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:52.216 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.216 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.216 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.216 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.216 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:52.216 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:18:52.785 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.785 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:52.785 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:18:52.785 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:52.785 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.785 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:52.785 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:52.785 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:52.785 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:53.044 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0
00:18:53.044 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:53.044 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:53.044 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:53.044 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:53.044 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:53.044 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:53.044 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:53.044 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:53.044 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:53.044 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:53.044 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:53.044 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:53.303
00:18:53.303 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:53.303 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:53.303 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:53.303 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:53.303 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:53.303 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:53.303 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:53.303 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:53.303 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:53.303 {
00:18:53.303 "cntlid": 105,
00:18:53.303 "qid": 0,
00:18:53.303 "state": "enabled",
00:18:53.303 "thread": "nvmf_tgt_poll_group_000",
00:18:53.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:18:53.304 "listen_address": {
00:18:53.304 "trtype": "TCP",
00:18:53.304 "adrfam": "IPv4",
00:18:53.304 "traddr": "10.0.0.2",
00:18:53.304 "trsvcid": "4420"
00:18:53.304 },
00:18:53.304 "peer_address": {
00:18:53.304 "trtype": "TCP",
00:18:53.304 "adrfam": "IPv4",
00:18:53.304 "traddr": "10.0.0.1",
00:18:53.304 "trsvcid": "57646"
00:18:53.304 },
00:18:53.304 "auth": {
00:18:53.304 "state": "completed",
00:18:53.304 "digest": "sha512",
00:18:53.304 "dhgroup": "ffdhe2048"
00:18:53.304 }
00:18:53.304 }
00:18:53.304 ]'
00:18:53.304 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:53.563 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:53.563 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:53.563 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:53.563 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:53.563 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:53.563 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:53.563 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:53.563 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=:
00:18:53.563 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=:
00:18:54.132 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:54.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:54.132 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:54.132 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:54.132 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:54.132 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:54.132 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:54.132 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:54.132 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:54.393 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:18:54.393 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:54.393 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:54.393 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:54.393 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:54.393 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:54.393 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:54.393 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:54.393 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:54.393 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:54.393 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:54.393 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:54.393 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:54.651
00:18:54.651 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:54.651 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:54.651 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:54.651 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:54.651 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:54.651 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:54.651 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:54.910 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:54.910 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:54.910 {
00:18:54.910 "cntlid": 107,
00:18:54.910 "qid": 0,
00:18:54.910 "state": "enabled",
00:18:54.910 "thread": "nvmf_tgt_poll_group_000",
00:18:54.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:18:54.910 "listen_address": {
00:18:54.910 "trtype": "TCP",
00:18:54.910 "adrfam": "IPv4",
00:18:54.910 "traddr": "10.0.0.2",
00:18:54.910 "trsvcid": "4420"
00:18:54.910 },
00:18:54.910 "peer_address": {
00:18:54.910 "trtype": "TCP",
00:18:54.910 "adrfam": "IPv4",
00:18:54.910 "traddr": "10.0.0.1",
00:18:54.910 "trsvcid": "57676"
00:18:54.911 },
00:18:54.911 "auth": {
00:18:54.911 "state": "completed",
00:18:54.911 "digest": "sha512",
00:18:54.911 "dhgroup": "ffdhe2048"
00:18:54.911 }
00:18:54.911 }
00:18:54.911 ]'
00:18:54.911 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:54.911 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:54.911 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:54.911 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:54.911 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:54.911 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:54.911 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:54.911 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:54.911 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==:
00:18:54.911 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==:
00:18:55.480 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:55.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:55.480 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:55.480 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:55.480 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:55.480 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:55.480 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:55.480 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:55.480 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:55.739 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:18:55.739 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:55.739 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:55.739 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:55.739 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:55.739 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:55.739 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:55.739 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:55.739 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:55.739 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:55.739 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:55.739 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:55.739 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:55.999
00:18:55.999 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:55.999 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:55.999 14:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:56.258 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:56.258 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:56.258 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:56.258 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:56.258 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:56.258 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:56.258 {
00:18:56.258 "cntlid": 109,
00:18:56.258 "qid": 0,
00:18:56.258 "state": "enabled",
00:18:56.258 "thread": "nvmf_tgt_poll_group_000",
00:18:56.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:18:56.258 "listen_address": {
00:18:56.258 "trtype": "TCP",
00:18:56.258 "adrfam": "IPv4",
00:18:56.258 "traddr": "10.0.0.2",
00:18:56.258 "trsvcid": "4420"
00:18:56.258 },
00:18:56.258 "peer_address": {
00:18:56.258 "trtype": "TCP",
00:18:56.258 "adrfam": "IPv4",
00:18:56.258 "traddr": "10.0.0.1",
00:18:56.258 "trsvcid": "57716"
00:18:56.258 },
00:18:56.258 "auth": {
00:18:56.258 "state": "completed",
00:18:56.258 "digest": "sha512",
00:18:56.258 "dhgroup": "ffdhe2048"
00:18:56.258 }
00:18:56.258 }
00:18:56.258 ]'
00:18:56.258 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:56.258 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:56.258 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:56.258 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:56.258 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:56.258 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:56.258 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:56.258 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:56.518 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U:
00:18:56.518 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U:
00:18:57.088 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:57.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:57.088 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:57.088 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.088 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:57.088 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.088 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:57.088 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:57.088 14:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:57.088 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3
00:18:57.088 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:57.088 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:57.088 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:57.088 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:57.088 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:57.088 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3
00:18:57.089 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.089 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:57.089 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.089 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:57.089 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:57.089 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:57.349
00:18:57.349 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:57.349 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:57.349 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:57.608 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:57.608 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:57.608 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.608 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:57.608 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.608 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:57.608 {
00:18:57.608 "cntlid": 111,
00:18:57.608 "qid": 0,
00:18:57.608 "state": "enabled",
00:18:57.608 "thread": "nvmf_tgt_poll_group_000",
00:18:57.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:18:57.608 "listen_address": {
00:18:57.608 "trtype": "TCP",
00:18:57.608 "adrfam": "IPv4",
00:18:57.608 "traddr": "10.0.0.2",
00:18:57.608 "trsvcid": "4420"
00:18:57.608 },
00:18:57.608 "peer_address": {
00:18:57.608 "trtype": "TCP",
00:18:57.608 "adrfam": "IPv4",
00:18:57.608 "traddr": "10.0.0.1",
00:18:57.608 "trsvcid": "39160"
00:18:57.608 },
00:18:57.608 "auth": {
00:18:57.608 "state": "completed",
00:18:57.608 "digest": "sha512",
00:18:57.608 "dhgroup": "ffdhe2048"
00:18:57.608 }
00:18:57.608 }
00:18:57.608 ]'
00:18:57.608 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:57.608 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:57.608 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:57.608 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:57.608 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:57.608 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:57.608 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:57.608 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:57.868 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=:
00:18:57.868 14:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=:
00:18:58.435 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:58.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:58.435 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:58.436 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:58.694
00:18:58.694 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:58.694 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:58.694 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:58.953 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:58.953 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:58.953 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.953 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:58.953 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.953 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:58.953 {
00:18:58.953 "cntlid": 113,
00:18:58.953 "qid": 0,
00:18:58.953 "state": "enabled",
00:18:58.953 "thread": "nvmf_tgt_poll_group_000",
00:18:58.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:18:58.953 "listen_address": {
00:18:58.953 "trtype": "TCP",
00:18:58.953 "adrfam": "IPv4",
00:18:58.953 "traddr": "10.0.0.2",
00:18:58.953 "trsvcid": "4420"
00:18:58.953 },
00:18:58.953 "peer_address": {
00:18:58.953 "trtype": "TCP",
00:18:58.953 "adrfam": "IPv4",
00:18:58.953 "traddr": "10.0.0.1",
00:18:58.953 "trsvcid": "39174"
00:18:58.954 },
00:18:58.954 "auth": {
00:18:58.954 "state": "completed",
00:18:58.954 "digest": "sha512",
00:18:58.954 "dhgroup": "ffdhe3072"
00:18:58.954 }
00:18:58.954 }
00:18:58.954 ]'
00:18:58.954 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:58.954 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:58.954 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:58.954 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:58.954 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:58.954 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:58.954 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:58.954 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:59.213 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=:
00:18:59.213 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=:
00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:59.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1
00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.782 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.041 00:19:00.041 14:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.301 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.301 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.301 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.301 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.301 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.301 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.301 { 00:19:00.301 "cntlid": 115, 00:19:00.301 "qid": 0, 00:19:00.301 "state": "enabled", 00:19:00.301 "thread": "nvmf_tgt_poll_group_000", 00:19:00.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:00.301 "listen_address": { 00:19:00.301 "trtype": "TCP", 00:19:00.301 "adrfam": "IPv4", 00:19:00.301 "traddr": "10.0.0.2", 00:19:00.301 "trsvcid": "4420" 00:19:00.301 }, 00:19:00.301 "peer_address": { 00:19:00.301 "trtype": "TCP", 00:19:00.301 "adrfam": "IPv4", 00:19:00.301 "traddr": "10.0.0.1", 00:19:00.301 "trsvcid": "39208" 00:19:00.301 }, 00:19:00.301 "auth": { 00:19:00.301 "state": "completed", 00:19:00.301 "digest": "sha512", 00:19:00.301 "dhgroup": "ffdhe3072" 00:19:00.301 } 00:19:00.301 } 00:19:00.301 ]' 00:19:00.301 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:19:00.301 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.301 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.301 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:00.301 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.301 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.301 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.301 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.560 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:19:00.560 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:19:01.127 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.127 14:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:01.127 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.127 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.127 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.127 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.127 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:01.127 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:01.385 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:19:01.385 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.385 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:01.385 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:01.385 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:01.385 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.385 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.385 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.385 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.385 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.385 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.385 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.385 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.385 00:19:01.385 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.385 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.385 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.644 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.644 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.645 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.645 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.645 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.645 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.645 { 00:19:01.645 "cntlid": 117, 00:19:01.645 "qid": 0, 00:19:01.645 "state": "enabled", 00:19:01.645 "thread": "nvmf_tgt_poll_group_000", 00:19:01.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:01.645 "listen_address": { 00:19:01.645 "trtype": "TCP", 00:19:01.645 "adrfam": "IPv4", 00:19:01.645 "traddr": "10.0.0.2", 00:19:01.645 "trsvcid": "4420" 00:19:01.645 }, 00:19:01.645 "peer_address": { 00:19:01.645 "trtype": "TCP", 00:19:01.645 "adrfam": "IPv4", 00:19:01.645 "traddr": "10.0.0.1", 00:19:01.645 "trsvcid": "39232" 00:19:01.645 }, 00:19:01.645 "auth": { 00:19:01.645 "state": "completed", 00:19:01.645 "digest": "sha512", 00:19:01.645 "dhgroup": "ffdhe3072" 00:19:01.645 } 00:19:01.645 } 00:19:01.645 ]' 00:19:01.645 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.645 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.645 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.645 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:01.645 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.645 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:01.645 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.645 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.903 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:19:01.903 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:19:02.467 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.467 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:02.467 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.467 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.467 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.467 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:02.467 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:02.467 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:02.725 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:02.725 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.725 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:02.725 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:02.725 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:02.725 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.725 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:02.725 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.725 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.725 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.725 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:02.725 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.725 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.984 00:19:02.984 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.984 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.984 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.984 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.984 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.984 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.984 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.984 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.984 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.984 { 00:19:02.984 "cntlid": 119, 00:19:02.984 "qid": 0, 00:19:02.984 "state": "enabled", 00:19:02.984 "thread": "nvmf_tgt_poll_group_000", 00:19:02.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:02.984 "listen_address": { 00:19:02.984 
"trtype": "TCP", 00:19:02.984 "adrfam": "IPv4", 00:19:02.984 "traddr": "10.0.0.2", 00:19:02.984 "trsvcid": "4420" 00:19:02.984 }, 00:19:02.984 "peer_address": { 00:19:02.984 "trtype": "TCP", 00:19:02.984 "adrfam": "IPv4", 00:19:02.984 "traddr": "10.0.0.1", 00:19:02.984 "trsvcid": "39262" 00:19:02.984 }, 00:19:02.984 "auth": { 00:19:02.984 "state": "completed", 00:19:02.984 "digest": "sha512", 00:19:02.984 "dhgroup": "ffdhe3072" 00:19:02.984 } 00:19:02.984 } 00:19:02.984 ]' 00:19:02.984 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.984 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.984 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.984 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:02.984 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.243 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.243 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.243 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.243 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:19:03.243 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:19:03.811 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.812 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:03.812 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.812 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.812 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.812 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.812 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.812 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:03.812 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:04.070 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:04.070 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.070 14:39:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:04.070 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:04.070 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:04.070 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.070 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.070 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.070 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.070 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.070 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.070 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.070 14:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.330 00:19:04.330 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.330 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.330 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.330 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.330 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.330 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.330 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.589 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.589 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.589 { 00:19:04.589 "cntlid": 121, 00:19:04.589 "qid": 0, 00:19:04.589 "state": "enabled", 00:19:04.589 "thread": "nvmf_tgt_poll_group_000", 00:19:04.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:04.589 "listen_address": { 00:19:04.589 "trtype": "TCP", 00:19:04.589 "adrfam": "IPv4", 00:19:04.589 "traddr": "10.0.0.2", 00:19:04.589 "trsvcid": "4420" 00:19:04.589 }, 00:19:04.589 "peer_address": { 00:19:04.589 "trtype": "TCP", 00:19:04.589 "adrfam": "IPv4", 00:19:04.589 "traddr": "10.0.0.1", 00:19:04.589 "trsvcid": "39276" 00:19:04.589 }, 00:19:04.589 "auth": { 00:19:04.589 "state": "completed", 00:19:04.589 "digest": "sha512", 00:19:04.589 "dhgroup": "ffdhe4096" 00:19:04.589 } 00:19:04.589 } 00:19:04.589 ]' 00:19:04.589 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.589 14:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.589 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.589 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:04.589 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.589 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.589 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.589 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.590 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:19:04.590 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:19:05.157 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:05.157 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:05.157 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.157 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.415 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.415 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.415 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:05.415 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:05.415 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:05.415 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.416 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:05.416 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:05.416 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:05.416 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.416 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.416 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.416 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.416 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.416 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.416 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.416 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.674 00:19:05.674 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.674 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.674 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.932 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.932 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.932 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.932 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.932 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.932 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.932 { 00:19:05.932 "cntlid": 123, 00:19:05.932 "qid": 0, 00:19:05.932 "state": "enabled", 00:19:05.932 "thread": "nvmf_tgt_poll_group_000", 00:19:05.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:05.932 "listen_address": { 00:19:05.932 "trtype": "TCP", 00:19:05.932 "adrfam": "IPv4", 00:19:05.932 "traddr": "10.0.0.2", 00:19:05.932 "trsvcid": "4420" 00:19:05.932 }, 00:19:05.932 "peer_address": { 00:19:05.932 "trtype": "TCP", 00:19:05.932 "adrfam": "IPv4", 00:19:05.932 "traddr": "10.0.0.1", 00:19:05.932 "trsvcid": "39286" 00:19:05.932 }, 00:19:05.932 "auth": { 00:19:05.932 "state": "completed", 00:19:05.932 "digest": "sha512", 00:19:05.932 "dhgroup": "ffdhe4096" 00:19:05.932 } 00:19:05.932 } 00:19:05.932 ]' 00:19:05.932 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.932 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.932 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.932 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:05.932 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.932 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:05.932 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.932 14:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.191 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:19:06.192 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:19:06.759 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.759 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:06.759 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.759 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.759 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.759 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:06.759 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:06.759 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:07.017 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:07.017 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.017 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:07.017 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:07.017 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:07.017 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.017 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.017 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.017 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.017 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.017 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.017 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.017 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.276 00:19:07.276 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.276 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.276 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.276 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.276 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.276 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.276 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.276 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.276 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.276 { 00:19:07.276 "cntlid": 125, 00:19:07.276 "qid": 0, 00:19:07.276 "state": "enabled", 00:19:07.276 "thread": "nvmf_tgt_poll_group_000", 00:19:07.276 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:07.276 "listen_address": { 00:19:07.276 "trtype": "TCP", 00:19:07.276 "adrfam": "IPv4", 00:19:07.276 "traddr": "10.0.0.2", 00:19:07.276 "trsvcid": "4420" 00:19:07.276 }, 00:19:07.276 "peer_address": { 00:19:07.276 "trtype": "TCP", 00:19:07.276 "adrfam": "IPv4", 00:19:07.276 "traddr": "10.0.0.1", 00:19:07.276 "trsvcid": "47416" 00:19:07.276 }, 00:19:07.276 "auth": { 00:19:07.276 "state": "completed", 00:19:07.276 "digest": "sha512", 00:19:07.276 "dhgroup": "ffdhe4096" 00:19:07.276 } 00:19:07.276 } 00:19:07.276 ]' 00:19:07.276 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.276 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.276 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.535 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:07.535 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.535 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.535 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.535 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.535 14:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:19:07.535 14:39:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:19:08.101 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.101 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:08.101 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.101 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.101 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.101 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.101 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:08.101 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:08.360 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:08.360 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:19:08.360 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:08.360 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:08.360 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:08.360 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.360 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:08.360 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.360 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.360 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.360 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:08.360 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.361 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.620 00:19:08.620 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:08.620 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.620 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.620 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.620 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.620 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.620 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.879 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.879 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.879 { 00:19:08.879 "cntlid": 127, 00:19:08.879 "qid": 0, 00:19:08.879 "state": "enabled", 00:19:08.879 "thread": "nvmf_tgt_poll_group_000", 00:19:08.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:08.879 "listen_address": { 00:19:08.879 "trtype": "TCP", 00:19:08.879 "adrfam": "IPv4", 00:19:08.879 "traddr": "10.0.0.2", 00:19:08.879 "trsvcid": "4420" 00:19:08.879 }, 00:19:08.879 "peer_address": { 00:19:08.879 "trtype": "TCP", 00:19:08.879 "adrfam": "IPv4", 00:19:08.879 "traddr": "10.0.0.1", 00:19:08.879 "trsvcid": "47454" 00:19:08.879 }, 00:19:08.879 "auth": { 00:19:08.879 "state": "completed", 00:19:08.879 "digest": "sha512", 00:19:08.879 "dhgroup": "ffdhe4096" 00:19:08.879 } 00:19:08.879 } 00:19:08.879 ]' 00:19:08.879 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.879 14:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.879 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.879 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:08.879 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.879 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.880 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.880 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.880 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:19:08.880 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:19:09.447 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.448 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:09.448 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.448 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.448 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.448 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.448 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.448 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:09.448 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:09.707 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:09.707 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.707 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:09.707 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:09.707 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:09.707 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.707 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.707 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.707 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.707 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.707 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.707 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.707 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.966 00:19:09.966 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.966 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.966 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.224 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.224 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.224 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.224 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.224 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.224 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.224 { 00:19:10.224 "cntlid": 129, 00:19:10.224 "qid": 0, 00:19:10.224 "state": "enabled", 00:19:10.224 "thread": "nvmf_tgt_poll_group_000", 00:19:10.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:10.224 "listen_address": { 00:19:10.224 "trtype": "TCP", 00:19:10.224 "adrfam": "IPv4", 00:19:10.224 "traddr": "10.0.0.2", 00:19:10.224 "trsvcid": "4420" 00:19:10.224 }, 00:19:10.224 "peer_address": { 00:19:10.224 "trtype": "TCP", 00:19:10.224 "adrfam": "IPv4", 00:19:10.224 "traddr": "10.0.0.1", 00:19:10.224 "trsvcid": "47492" 00:19:10.224 }, 00:19:10.224 "auth": { 00:19:10.224 "state": "completed", 00:19:10.224 "digest": "sha512", 00:19:10.224 "dhgroup": "ffdhe6144" 00:19:10.224 } 00:19:10.224 } 00:19:10.224 ]' 00:19:10.225 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.225 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.225 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.225 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:10.225 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.225 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:10.225 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.225 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.483 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:19:10.483 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:19:11.053 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.053 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:11.053 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.053 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.053 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.053 14:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.053 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:11.053 14:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:11.053 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:11.053 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.053 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:11.053 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:11.053 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:11.053 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.053 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.053 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.053 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.053 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.053 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:11.053 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.053 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.312 00:19:11.572 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.572 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.572 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.572 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.572 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.572 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.572 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.572 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.572 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.572 { 00:19:11.572 "cntlid": 131, 00:19:11.572 "qid": 0, 00:19:11.572 "state": 
"enabled", 00:19:11.572 "thread": "nvmf_tgt_poll_group_000", 00:19:11.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:11.572 "listen_address": { 00:19:11.572 "trtype": "TCP", 00:19:11.572 "adrfam": "IPv4", 00:19:11.572 "traddr": "10.0.0.2", 00:19:11.572 "trsvcid": "4420" 00:19:11.572 }, 00:19:11.572 "peer_address": { 00:19:11.572 "trtype": "TCP", 00:19:11.572 "adrfam": "IPv4", 00:19:11.572 "traddr": "10.0.0.1", 00:19:11.572 "trsvcid": "47524" 00:19:11.572 }, 00:19:11.572 "auth": { 00:19:11.572 "state": "completed", 00:19:11.572 "digest": "sha512", 00:19:11.572 "dhgroup": "ffdhe6144" 00:19:11.572 } 00:19:11.572 } 00:19:11.572 ]' 00:19:11.572 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.572 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.572 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.572 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:11.572 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.572 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.572 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.572 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.830 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret 
DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:19:11.830 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:19:12.398 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.398 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:12.398 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.398 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.398 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.398 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.398 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:12.398 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:12.657 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:19:12.657 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.657 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:12.657 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:12.657 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:12.657 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.657 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.657 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.657 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.657 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.657 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.657 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.657 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.917 00:19:12.917 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.917 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.917 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.175 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.175 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.175 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.175 14:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.175 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.175 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.175 { 00:19:13.175 "cntlid": 133, 00:19:13.175 "qid": 0, 00:19:13.175 "state": "enabled", 00:19:13.175 "thread": "nvmf_tgt_poll_group_000", 00:19:13.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:13.175 "listen_address": { 00:19:13.175 "trtype": "TCP", 00:19:13.175 "adrfam": "IPv4", 00:19:13.175 "traddr": "10.0.0.2", 00:19:13.175 "trsvcid": "4420" 00:19:13.175 }, 00:19:13.175 "peer_address": { 00:19:13.175 "trtype": "TCP", 00:19:13.175 "adrfam": "IPv4", 00:19:13.175 "traddr": "10.0.0.1", 00:19:13.175 "trsvcid": "47550" 00:19:13.175 }, 00:19:13.175 "auth": { 00:19:13.175 "state": "completed", 00:19:13.175 "digest": "sha512", 00:19:13.175 "dhgroup": "ffdhe6144" 00:19:13.175 } 
00:19:13.175 } 00:19:13.175 ]' 00:19:13.175 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.175 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.175 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.175 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:13.175 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.175 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.175 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.176 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.434 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:19:13.434 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:19:14.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.002 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.260 00:19:14.260 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.260 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.260 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.519 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.519 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:14.519 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.519 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.519 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.519 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.519 { 00:19:14.519 "cntlid": 135, 00:19:14.519 "qid": 0, 00:19:14.519 "state": "enabled", 00:19:14.519 "thread": "nvmf_tgt_poll_group_000", 00:19:14.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:14.519 "listen_address": { 00:19:14.519 "trtype": "TCP", 00:19:14.519 "adrfam": "IPv4", 00:19:14.519 "traddr": "10.0.0.2", 00:19:14.519 "trsvcid": "4420" 00:19:14.519 }, 00:19:14.519 "peer_address": { 00:19:14.519 "trtype": "TCP", 00:19:14.519 "adrfam": "IPv4", 00:19:14.519 "traddr": "10.0.0.1", 00:19:14.519 "trsvcid": "47582" 00:19:14.519 }, 00:19:14.519 "auth": { 00:19:14.519 "state": "completed", 00:19:14.519 "digest": "sha512", 00:19:14.519 "dhgroup": "ffdhe6144" 00:19:14.519 } 00:19:14.519 } 00:19:14.519 ]' 00:19:14.519 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.519 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.519 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.519 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:14.519 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.519 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.519 14:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.519 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.777 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:19:14.777 14:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:19:15.353 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.353 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:15.353 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.353 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.353 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.353 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.353 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.353 14:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:15.353 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:15.612 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:15.612 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.612 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:15.612 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:15.612 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:15.612 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.612 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.612 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.612 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.612 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.612 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.613 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.613 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.871 00:19:15.871 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.871 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.871 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.129 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.129 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.129 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.129 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.129 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.129 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.129 { 00:19:16.129 "cntlid": 137, 00:19:16.129 "qid": 0, 00:19:16.129 "state": "enabled", 00:19:16.129 "thread": "nvmf_tgt_poll_group_000", 00:19:16.129 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:16.129 "listen_address": { 00:19:16.129 "trtype": "TCP", 00:19:16.129 "adrfam": "IPv4", 00:19:16.129 "traddr": "10.0.0.2", 00:19:16.129 "trsvcid": "4420" 00:19:16.129 }, 00:19:16.129 "peer_address": { 00:19:16.129 "trtype": "TCP", 00:19:16.129 "adrfam": "IPv4", 00:19:16.129 "traddr": "10.0.0.1", 00:19:16.129 "trsvcid": "47614" 00:19:16.129 }, 00:19:16.129 "auth": { 00:19:16.129 "state": "completed", 00:19:16.129 "digest": "sha512", 00:19:16.129 "dhgroup": "ffdhe8192" 00:19:16.129 } 00:19:16.129 } 00:19:16.129 ]' 00:19:16.129 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.129 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.129 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.129 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:16.129 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.129 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.129 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.129 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.388 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret 
DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:19:16.388 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:19:16.954 14:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.954 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:16.954 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.954 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.954 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.954 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.954 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:16.954 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:17.212 14:39:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:17.212 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.212 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:17.212 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:17.212 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:17.212 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.212 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.212 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.212 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.212 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.212 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.212 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.212 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.781 00:19:17.781 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.781 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.781 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.781 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.781 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.781 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.781 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.781 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.781 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.781 { 00:19:17.781 "cntlid": 139, 00:19:17.781 "qid": 0, 00:19:17.781 "state": "enabled", 00:19:17.781 "thread": "nvmf_tgt_poll_group_000", 00:19:17.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:17.781 "listen_address": { 00:19:17.781 "trtype": "TCP", 00:19:17.781 "adrfam": "IPv4", 00:19:17.781 "traddr": "10.0.0.2", 00:19:17.781 "trsvcid": "4420" 00:19:17.781 }, 00:19:17.781 "peer_address": { 00:19:17.781 "trtype": "TCP", 00:19:17.781 "adrfam": "IPv4", 00:19:17.781 "traddr": "10.0.0.1", 00:19:17.781 "trsvcid": "50376" 00:19:17.781 }, 00:19:17.781 "auth": { 00:19:17.781 "state": 
"completed", 00:19:17.781 "digest": "sha512", 00:19:17.781 "dhgroup": "ffdhe8192" 00:19:17.781 } 00:19:17.781 } 00:19:17.781 ]' 00:19:17.781 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.041 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.041 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.041 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:18.041 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.041 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.041 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.041 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.041 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:19:18.041 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: --dhchap-ctrl-secret DHHC-1:02:MjM1MzJkM2MyZmI5NjFlNjJkM2RjYzAyNWZiMzdlMjdkOGU2OGM0ZjRlYjY4NTEzKr6WOw==: 00:19:18.608 14:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.608 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:18.609 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.609 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.609 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.609 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.609 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:18.609 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:18.868 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:18.868 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.868 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:18.868 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:18.868 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:18.868 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.868 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.868 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.868 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.868 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.868 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.868 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.868 14:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.435 00:19:19.435 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.435 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.435 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.435 
14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.435 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.435 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.435 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.435 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.435 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.435 { 00:19:19.435 "cntlid": 141, 00:19:19.435 "qid": 0, 00:19:19.435 "state": "enabled", 00:19:19.435 "thread": "nvmf_tgt_poll_group_000", 00:19:19.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:19.435 "listen_address": { 00:19:19.435 "trtype": "TCP", 00:19:19.435 "adrfam": "IPv4", 00:19:19.435 "traddr": "10.0.0.2", 00:19:19.435 "trsvcid": "4420" 00:19:19.435 }, 00:19:19.435 "peer_address": { 00:19:19.435 "trtype": "TCP", 00:19:19.435 "adrfam": "IPv4", 00:19:19.435 "traddr": "10.0.0.1", 00:19:19.435 "trsvcid": "50406" 00:19:19.435 }, 00:19:19.435 "auth": { 00:19:19.435 "state": "completed", 00:19:19.435 "digest": "sha512", 00:19:19.435 "dhgroup": "ffdhe8192" 00:19:19.435 } 00:19:19.435 } 00:19:19.435 ]' 00:19:19.435 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.435 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.435 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.435 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:19.435 14:39:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.694 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.694 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.694 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.694 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:19:19.694 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:01:NzEyMDFmNjI1MGNhNjY2M2ZlNzQ2MWViODBkNzgyMzErEK6U: 00:19:20.262 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.262 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:20.262 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.262 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.262 
14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.262 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.262 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:20.262 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:20.521 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:20.521 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.521 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:20.521 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:20.521 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:20.521 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.521 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:20.521 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.521 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.521 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.521 14:39:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:20.521 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:20.521 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:21.089 00:19:21.089 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.089 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.089 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.089 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.089 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.089 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.089 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.089 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.089 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.089 { 00:19:21.089 "cntlid": 143, 
00:19:21.089 "qid": 0, 00:19:21.089 "state": "enabled", 00:19:21.089 "thread": "nvmf_tgt_poll_group_000", 00:19:21.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:21.089 "listen_address": { 00:19:21.089 "trtype": "TCP", 00:19:21.089 "adrfam": "IPv4", 00:19:21.089 "traddr": "10.0.0.2", 00:19:21.089 "trsvcid": "4420" 00:19:21.089 }, 00:19:21.089 "peer_address": { 00:19:21.089 "trtype": "TCP", 00:19:21.089 "adrfam": "IPv4", 00:19:21.089 "traddr": "10.0.0.1", 00:19:21.089 "trsvcid": "50436" 00:19:21.089 }, 00:19:21.089 "auth": { 00:19:21.089 "state": "completed", 00:19:21.089 "digest": "sha512", 00:19:21.089 "dhgroup": "ffdhe8192" 00:19:21.089 } 00:19:21.089 } 00:19:21.089 ]' 00:19:21.089 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.089 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.089 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.089 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:21.089 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.089 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.089 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.089 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.348 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:19:21.348 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:19:21.915 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.915 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:21.915 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.915 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.915 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.915 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:21.915 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:21.915 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:21.915 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:21.915 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:19:21.915 14:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:22.174 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:22.174 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.174 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:22.174 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:22.174 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:22.174 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.174 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.174 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.174 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.174 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.174 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.174 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.174 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.433 00:19:22.433 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.433 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.433 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.691 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.691 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.691 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.691 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.691 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.691 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.691 { 00:19:22.691 "cntlid": 145, 00:19:22.691 "qid": 0, 00:19:22.691 "state": "enabled", 00:19:22.691 "thread": "nvmf_tgt_poll_group_000", 00:19:22.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:22.691 "listen_address": { 
00:19:22.691 "trtype": "TCP", 00:19:22.691 "adrfam": "IPv4", 00:19:22.691 "traddr": "10.0.0.2", 00:19:22.691 "trsvcid": "4420" 00:19:22.691 }, 00:19:22.691 "peer_address": { 00:19:22.691 "trtype": "TCP", 00:19:22.691 "adrfam": "IPv4", 00:19:22.691 "traddr": "10.0.0.1", 00:19:22.691 "trsvcid": "50474" 00:19:22.691 }, 00:19:22.691 "auth": { 00:19:22.691 "state": "completed", 00:19:22.691 "digest": "sha512", 00:19:22.691 "dhgroup": "ffdhe8192" 00:19:22.691 } 00:19:22.691 } 00:19:22.691 ]' 00:19:22.691 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.691 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.691 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.691 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:22.691 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.691 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.692 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.692 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.950 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:19:22.950 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZGE1NDI0M2I5NDMxYjk4YTVjMTQ2ZTg3MGIxZTA0NDk4ZDA0ODZlNzQ0OTc5MDdiWu/+0w==: --dhchap-ctrl-secret DHHC-1:03:MTk3MGNhZDFlMDg5MDQ4MGJjZDI2YjJmZDkwZWEzNzI2YjM3ZDQ3MzM4ZDhlZDY5NDRjZjg0MDljZGE1NGY4OQ9SlNA=: 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # local es=0 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:23.517 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:24.085 request: 00:19:24.085 { 00:19:24.085 "name": "nvme0", 00:19:24.085 "trtype": "tcp", 00:19:24.085 "traddr": "10.0.0.2", 00:19:24.085 "adrfam": "ipv4", 00:19:24.085 "trsvcid": "4420", 00:19:24.085 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:24.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:24.085 "prchk_reftag": false, 00:19:24.085 "prchk_guard": false, 00:19:24.085 "hdgst": false, 00:19:24.085 "ddgst": 
false, 00:19:24.085 "dhchap_key": "key2", 00:19:24.085 "allow_unrecognized_csi": false, 00:19:24.085 "method": "bdev_nvme_attach_controller", 00:19:24.085 "req_id": 1 00:19:24.085 } 00:19:24.085 Got JSON-RPC error response 00:19:24.085 response: 00:19:24.085 { 00:19:24.085 "code": -5, 00:19:24.085 "message": "Input/output error" 00:19:24.085 } 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:24.085 14:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:24.346 request: 00:19:24.346 { 00:19:24.346 "name": "nvme0", 00:19:24.346 "trtype": "tcp", 00:19:24.346 "traddr": "10.0.0.2", 
00:19:24.346 "adrfam": "ipv4", 00:19:24.346 "trsvcid": "4420", 00:19:24.346 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:24.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:24.346 "prchk_reftag": false, 00:19:24.346 "prchk_guard": false, 00:19:24.346 "hdgst": false, 00:19:24.346 "ddgst": false, 00:19:24.346 "dhchap_key": "key1", 00:19:24.346 "dhchap_ctrlr_key": "ckey2", 00:19:24.346 "allow_unrecognized_csi": false, 00:19:24.346 "method": "bdev_nvme_attach_controller", 00:19:24.346 "req_id": 1 00:19:24.346 } 00:19:24.346 Got JSON-RPC error response 00:19:24.346 response: 00:19:24.346 { 00:19:24.346 "code": -5, 00:19:24.346 "message": "Input/output error" 00:19:24.346 } 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 
00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.346 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.916 request: 00:19:24.916 { 00:19:24.916 "name": "nvme0", 00:19:24.916 "trtype": "tcp", 00:19:24.916 "traddr": "10.0.0.2", 00:19:24.916 "adrfam": "ipv4", 00:19:24.916 "trsvcid": "4420", 00:19:24.916 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:24.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:24.916 "prchk_reftag": false, 00:19:24.916 "prchk_guard": false, 00:19:24.916 "hdgst": false, 00:19:24.916 "ddgst": false, 00:19:24.916 "dhchap_key": "key1", 00:19:24.916 "dhchap_ctrlr_key": "ckey1", 00:19:24.916 "allow_unrecognized_csi": false, 00:19:24.916 "method": "bdev_nvme_attach_controller", 00:19:24.916 "req_id": 1 00:19:24.916 } 00:19:24.916 Got JSON-RPC error response 00:19:24.916 response: 00:19:24.916 { 00:19:24.916 "code": -5, 00:19:24.916 "message": "Input/output error" 00:19:24.916 } 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.916 
14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3877602 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3877602 ']' 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3877602 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3877602 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3877602' 00:19:24.916 killing process with pid 3877602 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3877602 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3877602 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3903120 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3903120 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3903120 ']' 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.916 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:25.175 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.175 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:25.175 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:25.175 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:25.175 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.175 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; 
nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.175 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:25.175 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3903120 00:19:25.175 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3903120 ']' 00:19:25.175 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.175 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.176 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.176 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.176 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.435 null0 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in 
"${!keys[@]}" 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.VgM 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Y0B ]] 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Y0B 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.De2 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.zU4 ]] 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key 
ckey1 /tmp/spdk.key-sha384.zU4 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.HT4 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.1Ol ]] 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Ol 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.w9E 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:25.435 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:25.436 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.436 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:25.436 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:25.436 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:25.436 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.436 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:25.436 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.436 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.436 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.436 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:25.436 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:25.436 14:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.374 nvme0n1 00:19:26.374 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.374 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.374 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.374 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.374 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.374 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.374 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.374 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.374 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.374 { 00:19:26.374 "cntlid": 1, 00:19:26.374 "qid": 0, 00:19:26.374 "state": "enabled", 00:19:26.374 "thread": "nvmf_tgt_poll_group_000", 00:19:26.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:26.374 "listen_address": { 00:19:26.374 "trtype": "TCP", 00:19:26.374 
"adrfam": "IPv4", 00:19:26.374 "traddr": "10.0.0.2", 00:19:26.374 "trsvcid": "4420" 00:19:26.374 }, 00:19:26.374 "peer_address": { 00:19:26.374 "trtype": "TCP", 00:19:26.374 "adrfam": "IPv4", 00:19:26.374 "traddr": "10.0.0.1", 00:19:26.374 "trsvcid": "50526" 00:19:26.374 }, 00:19:26.374 "auth": { 00:19:26.374 "state": "completed", 00:19:26.374 "digest": "sha512", 00:19:26.374 "dhgroup": "ffdhe8192" 00:19:26.374 } 00:19:26.374 } 00:19:26.374 ]' 00:19:26.374 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.374 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.374 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.374 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:26.374 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.374 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.374 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.374 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.634 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:19:26.634 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:19:27.203 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.203 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:27.203 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.203 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.203 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.203 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:27.203 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.203 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.203 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.203 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:27.203 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:27.463 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT 
bdev_connect -b nvme0 --dhchap-key key3 00:19:27.463 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:27.463 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:27.463 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:27.463 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.463 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:27.463 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.463 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:27.463 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.463 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.463 request: 00:19:27.463 { 00:19:27.463 "name": "nvme0", 00:19:27.463 "trtype": "tcp", 00:19:27.463 "traddr": "10.0.0.2", 00:19:27.463 "adrfam": "ipv4", 00:19:27.463 "trsvcid": "4420", 00:19:27.463 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:27.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:27.463 
"prchk_reftag": false, 00:19:27.463 "prchk_guard": false, 00:19:27.463 "hdgst": false, 00:19:27.463 "ddgst": false, 00:19:27.463 "dhchap_key": "key3", 00:19:27.463 "allow_unrecognized_csi": false, 00:19:27.463 "method": "bdev_nvme_attach_controller", 00:19:27.463 "req_id": 1 00:19:27.463 } 00:19:27.463 Got JSON-RPC error response 00:19:27.463 response: 00:19:27.463 { 00:19:27.463 "code": -5, 00:19:27.463 "message": "Input/output error" 00:19:27.463 } 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 
--dhchap-key key3 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.722 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.981 request: 00:19:27.981 { 00:19:27.981 "name": "nvme0", 00:19:27.981 "trtype": "tcp", 00:19:27.981 "traddr": "10.0.0.2", 00:19:27.981 "adrfam": "ipv4", 00:19:27.981 "trsvcid": "4420", 00:19:27.981 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:27.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:27.981 "prchk_reftag": false, 00:19:27.981 "prchk_guard": false, 00:19:27.981 "hdgst": false, 00:19:27.981 "ddgst": false, 00:19:27.981 "dhchap_key": "key3", 00:19:27.981 "allow_unrecognized_csi": false, 00:19:27.981 "method": "bdev_nvme_attach_controller", 00:19:27.981 "req_id": 1 00:19:27.981 } 
00:19:27.981 Got JSON-RPC error response 00:19:27.981 response: 00:19:27.981 { 00:19:27.981 "code": -5, 00:19:27.981 "message": "Input/output error" 00:19:27.981 } 00:19:27.981 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:27.981 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.981 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.981 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.981 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:27.981 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:27.981 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:27.981 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:27.981 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:27.981 14:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:27.981 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:27.981 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:27.981 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.981 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.981 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:27.981 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.981 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.981 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.981 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:27.981 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:27.981 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:27.981 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:27.981 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.981 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:27.981 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.981 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:27.981 14:39:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:27.981 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:28.550 request: 00:19:28.550 { 00:19:28.550 "name": "nvme0", 00:19:28.550 "trtype": "tcp", 00:19:28.550 "traddr": "10.0.0.2", 00:19:28.550 "adrfam": "ipv4", 00:19:28.550 "trsvcid": "4420", 00:19:28.550 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:28.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:28.550 "prchk_reftag": false, 00:19:28.550 "prchk_guard": false, 00:19:28.550 "hdgst": false, 00:19:28.550 "ddgst": false, 00:19:28.550 "dhchap_key": "key0", 00:19:28.550 "dhchap_ctrlr_key": "key1", 00:19:28.550 "allow_unrecognized_csi": false, 00:19:28.550 "method": "bdev_nvme_attach_controller", 00:19:28.550 "req_id": 1 00:19:28.550 } 00:19:28.550 Got JSON-RPC error response 00:19:28.550 response: 00:19:28.550 { 00:19:28.550 "code": -5, 00:19:28.550 "message": "Input/output error" 00:19:28.550 } 00:19:28.550 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:28.550 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:28.550 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:28.550 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 
0 )) 00:19:28.550 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:28.550 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:28.550 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:28.550 nvme0n1 00:19:28.550 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:28.550 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:28.550 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.809 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.809 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.809 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.068 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:19:29.068 14:39:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.068 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.068 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.068 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:29.068 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:29.068 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:29.666 nvme0n1 00:19:29.666 14:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:29.666 14:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.666 14:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:29.978 14:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.978 14:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:29.978 14:39:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.978 14:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.978 14:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.978 14:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:29.978 14:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:29.978 14:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.978 14:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.978 14:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:19:29.978 14:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: --dhchap-ctrl-secret DHHC-1:03:MjdmNTMyMTBjMzk2MmNhOGFkMWI5NTUyN2M1NzNlNjNlODdkNzMwNjVhNWY3NTkwZTc1ZDI1MGU0NjZiYWVlNoKFJ9U=: 00:19:30.543 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:30.543 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:30.543 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:30.544 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:30.544 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:30.544 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:30.544 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:30.544 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.544 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.802 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:19:30.802 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:30.802 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:30.802 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:30.802 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.802 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:30.802 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.802 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 
00:19:30.802 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:30.802 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:31.061 request: 00:19:31.061 { 00:19:31.061 "name": "nvme0", 00:19:31.061 "trtype": "tcp", 00:19:31.061 "traddr": "10.0.0.2", 00:19:31.061 "adrfam": "ipv4", 00:19:31.061 "trsvcid": "4420", 00:19:31.061 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:31.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:31.061 "prchk_reftag": false, 00:19:31.061 "prchk_guard": false, 00:19:31.061 "hdgst": false, 00:19:31.061 "ddgst": false, 00:19:31.061 "dhchap_key": "key1", 00:19:31.061 "allow_unrecognized_csi": false, 00:19:31.061 "method": "bdev_nvme_attach_controller", 00:19:31.061 "req_id": 1 00:19:31.061 } 00:19:31.061 Got JSON-RPC error response 00:19:31.061 response: 00:19:31.061 { 00:19:31.061 "code": -5, 00:19:31.061 "message": "Input/output error" 00:19:31.061 } 00:19:31.061 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:31.061 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:31.061 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:31.061 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:31.061 14:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:31.061 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:31.061 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:31.998 nvme0n1 00:19:31.998 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:31.998 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.998 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:31.998 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.998 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.998 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.256 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 
00:19:32.256 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.256 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.256 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.256 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:32.256 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:32.257 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:32.515 nvme0n1 00:19:32.515 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:32.515 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.515 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:32.515 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.515 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.515 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:32.774 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:32.774 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.774 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.774 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.774 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: '' 2s 00:19:32.774 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:32.774 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:32.774 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: 00:19:32.774 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:32.774 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:32.774 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:32.774 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: ]] 00:19:32.774 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NTQyYzE3NDQzN2Y2NTJlNjVjMDhjN2NkZTk1MmVmYTR5M0lH: 00:19:32.774 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:32.774 14:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:32.774 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: 2s 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:34.678 14:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: ]] 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MjYzNjk0ZTJlYmZjODcwMjdjNjFlMTc0ZDE2ZjlhOWUxM2M0MDM3YTQ1NGEwZTA3lt6ijw==: 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:34.678 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:37.209 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:37.209 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:37.209 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:37.209 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:37.209 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:37.209 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 
00:19:37.209 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:37.209 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.209 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:37.209 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.209 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.209 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.209 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:37.209 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:37.209 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:37.468 nvme0n1 00:19:37.468 14:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:37.468 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.468 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.468 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.468 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:37.469 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:38.036 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:38.036 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.036 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:38.294 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.294 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:38.294 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.294 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:38.294 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.294 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:38.294 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:38.294 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:38.294 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.294 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:38.554 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.554 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:38.554 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.554 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.554 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.554 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:38.554 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:38.554 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:38.554 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:38.554 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.554 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:38.554 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.554 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:38.554 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:38.823 request: 00:19:38.823 { 00:19:38.823 "name": "nvme0", 00:19:38.823 "dhchap_key": "key1", 00:19:38.823 "dhchap_ctrlr_key": "key3", 00:19:38.823 "method": "bdev_nvme_set_keys", 00:19:38.823 "req_id": 1 00:19:38.823 } 00:19:38.823 Got JSON-RPC error response 00:19:38.823 response: 00:19:38.823 { 00:19:38.823 "code": -13, 00:19:38.823 "message": "Permission denied" 00:19:38.823 } 00:19:39.082 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:39.082 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:39.082 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:39.082 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:39.082 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 
-- # jq length 00:19:39.082 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:39.082 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.082 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:19:39.082 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:40.018 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:40.018 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:40.018 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.276 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:40.276 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:40.276 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.276 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.276 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.276 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:40.276 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 
-- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:40.276 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:41.208 nvme0n1 00:19:41.208 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:41.208 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.208 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.208 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.208 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:41.208 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:41.208 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:41.208 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:41.208 14:39:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.208 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:41.208 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.208 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:41.208 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:41.466 request: 00:19:41.466 { 00:19:41.466 "name": "nvme0", 00:19:41.466 "dhchap_key": "key2", 00:19:41.466 "dhchap_ctrlr_key": "key0", 00:19:41.466 "method": "bdev_nvme_set_keys", 00:19:41.466 "req_id": 1 00:19:41.466 } 00:19:41.466 Got JSON-RPC error response 00:19:41.466 response: 00:19:41.466 { 00:19:41.466 "code": -13, 00:19:41.466 "message": "Permission denied" 00:19:41.466 } 00:19:41.466 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:41.466 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.466 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.466 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.466 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:41.466 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.466 14:39:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:41.725 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:41.725 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:42.663 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:42.663 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.663 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:42.663 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:42.663 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:42.663 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:42.663 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3877622 00:19:42.663 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3877622 ']' 00:19:42.663 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3877622 00:19:42.663 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:42.663 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.922 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3877622 00:19:42.922 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:42.922 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:42.922 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3877622' 00:19:42.922 killing process with pid 3877622 00:19:42.922 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3877622 00:19:42.922 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3877622 00:19:42.922 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:42.922 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:42.922 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:42.922 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:42.922 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:42.922 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:42.922 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:42.922 rmmod nvme_tcp 00:19:42.922 rmmod nvme_fabrics 00:19:43.182 rmmod nvme_keyring 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3903120 ']' 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3903120 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3903120 ']' 
00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3903120 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3903120 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3903120' 00:19:43.182 killing process with pid 3903120 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3903120 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3903120 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.182 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.VgM /tmp/spdk.key-sha256.De2 /tmp/spdk.key-sha384.HT4 /tmp/spdk.key-sha512.w9E /tmp/spdk.key-sha512.Y0B /tmp/spdk.key-sha384.zU4 /tmp/spdk.key-sha256.1Ol '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:45.722 00:19:45.722 real 2m15.294s 00:19:45.722 user 5m4.551s 00:19:45.722 sys 0m16.980s 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.722 ************************************ 00:19:45.722 END TEST nvmf_auth_target 00:19:45.722 ************************************ 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # 
'[' 4 -le 1 ']' 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:45.722 ************************************ 00:19:45.722 START TEST nvmf_bdevio_no_huge 00:19:45.722 ************************************ 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:45.722 * Looking for test storage... 00:19:45.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@366 -- # ver2[v]=2 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:45.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.722 --rc genhtml_branch_coverage=1 00:19:45.722 --rc genhtml_function_coverage=1 00:19:45.722 --rc genhtml_legend=1 00:19:45.722 --rc geninfo_all_blocks=1 00:19:45.722 --rc geninfo_unexecuted_blocks=1 00:19:45.722 00:19:45.722 ' 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:45.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.722 --rc genhtml_branch_coverage=1 00:19:45.722 --rc genhtml_function_coverage=1 00:19:45.722 --rc genhtml_legend=1 00:19:45.722 --rc geninfo_all_blocks=1 00:19:45.722 --rc geninfo_unexecuted_blocks=1 00:19:45.722 00:19:45.722 ' 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:45.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.722 --rc genhtml_branch_coverage=1 00:19:45.722 --rc genhtml_function_coverage=1 00:19:45.722 --rc genhtml_legend=1 00:19:45.722 --rc geninfo_all_blocks=1 00:19:45.722 --rc geninfo_unexecuted_blocks=1 00:19:45.722 00:19:45.722 ' 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:19:45.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.722 --rc genhtml_branch_coverage=1 00:19:45.722 --rc genhtml_function_coverage=1 00:19:45.722 --rc genhtml_legend=1 00:19:45.722 --rc geninfo_all_blocks=1 00:19:45.722 --rc geninfo_unexecuted_blocks=1 00:19:45.722 00:19:45.722 ' 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.722 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:45.723 14:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.723 14:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:45.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:45.723 14:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:45.723 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:51.001 14:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:51.001 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:51.001 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:51.001 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:51.002 Found net devices under 0000:31:00.0: cvl_0_0 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 
00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:51.002 Found net devices under 0000:31:00.1: cvl_0_1 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:51.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:19:51.002 00:19:51.002 --- 10.0.0.2 ping statistics --- 00:19:51.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.002 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:51.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:51.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:19:51.002 00:19:51.002 --- 10.0.0.1 ping statistics --- 00:19:51.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.002 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.002 14:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3911598 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3911598 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3911598 ']' 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.002 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:51.002 [2024-11-20 14:39:57.735138] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:19:51.002 [2024-11-20 14:39:57.735196] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:51.002 [2024-11-20 14:39:57.829090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:51.002 [2024-11-20 14:39:57.886908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.002 [2024-11-20 14:39:57.886950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.002 [2024-11-20 14:39:57.886959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.002 [2024-11-20 14:39:57.886966] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.002 [2024-11-20 14:39:57.886972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:51.002 [2024-11-20 14:39:57.888500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:51.002 [2024-11-20 14:39:57.888661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:51.002 [2024-11-20 14:39:57.888818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:51.002 [2024-11-20 14:39:57.888818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.573 [2024-11-20 14:39:58.565396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:51.573 14:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.573 Malloc0 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.573 [2024-11-20 14:39:58.603235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.573 14:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:51.573 { 00:19:51.573 "params": { 00:19:51.573 "name": "Nvme$subsystem", 00:19:51.573 "trtype": "$TEST_TRANSPORT", 00:19:51.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.573 "adrfam": "ipv4", 00:19:51.573 "trsvcid": "$NVMF_PORT", 00:19:51.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.573 "hdgst": ${hdgst:-false}, 00:19:51.573 "ddgst": ${ddgst:-false} 00:19:51.573 }, 00:19:51.573 "method": "bdev_nvme_attach_controller" 00:19:51.573 } 00:19:51.573 EOF 00:19:51.573 )") 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:51.573 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:51.573 "params": { 00:19:51.573 "name": "Nvme1", 00:19:51.573 "trtype": "tcp", 00:19:51.573 "traddr": "10.0.0.2", 00:19:51.573 "adrfam": "ipv4", 00:19:51.573 "trsvcid": "4420", 00:19:51.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.573 "hdgst": false, 00:19:51.573 "ddgst": false 00:19:51.573 }, 00:19:51.573 "method": "bdev_nvme_attach_controller" 00:19:51.573 }' 00:19:51.834 [2024-11-20 14:39:58.643362] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:19:51.834 [2024-11-20 14:39:58.643431] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3911637 ] 00:19:51.834 [2024-11-20 14:39:58.730825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:51.834 [2024-11-20 14:39:58.785048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.834 [2024-11-20 14:39:58.785200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.834 [2024-11-20 14:39:58.785201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.093 I/O targets: 00:19:52.093 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:52.093 00:19:52.093 00:19:52.093 CUnit - A unit testing framework for C - Version 2.1-3 00:19:52.093 http://cunit.sourceforge.net/ 00:19:52.093 00:19:52.093 00:19:52.093 Suite: bdevio tests on: Nvme1n1 00:19:52.093 Test: blockdev write read block ...passed 00:19:52.093 Test: blockdev write zeroes read block ...passed 00:19:52.093 Test: blockdev write zeroes read no split ...passed 00:19:52.093 Test: blockdev write zeroes 
read split ...passed 00:19:52.093 Test: blockdev write zeroes read split partial ...passed 00:19:52.093 Test: blockdev reset ...[2024-11-20 14:39:59.102638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:52.093 [2024-11-20 14:39:59.102710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c17fb0 (9): Bad file descriptor 00:19:52.093 [2024-11-20 14:39:59.117687] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:19:52.093 passed 00:19:52.093 Test: blockdev write read 8 blocks ...passed 00:19:52.353 Test: blockdev write read size > 128k ...passed 00:19:52.353 Test: blockdev write read invalid size ...passed 00:19:52.353 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:52.353 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:52.353 Test: blockdev write read max offset ...passed 00:19:52.353 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:52.353 Test: blockdev writev readv 8 blocks ...passed 00:19:52.353 Test: blockdev writev readv 30 x 1block ...passed 00:19:52.353 Test: blockdev writev readv block ...passed 00:19:52.353 Test: blockdev writev readv size > 128k ...passed 00:19:52.353 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:52.353 Test: blockdev comparev and writev ...[2024-11-20 14:39:59.338482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:52.353 [2024-11-20 14:39:59.338513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:52.353 [2024-11-20 14:39:59.338529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:52.353 [2024-11-20 
14:39:59.338538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:52.353 [2024-11-20 14:39:59.338880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:52.353 [2024-11-20 14:39:59.338892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:52.353 [2024-11-20 14:39:59.338906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:52.353 [2024-11-20 14:39:59.338916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:52.353 [2024-11-20 14:39:59.339253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:52.353 [2024-11-20 14:39:59.339267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:52.353 [2024-11-20 14:39:59.339285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:52.353 [2024-11-20 14:39:59.339294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:52.353 [2024-11-20 14:39:59.339635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:52.353 [2024-11-20 14:39:59.339647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:52.353 [2024-11-20 14:39:59.339661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:19:52.353 [2024-11-20 14:39:59.339668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:52.353 passed 00:19:52.613 Test: blockdev nvme passthru rw ...passed 00:19:52.613 Test: blockdev nvme passthru vendor specific ...[2024-11-20 14:39:59.421816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:52.613 [2024-11-20 14:39:59.421829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:52.613 [2024-11-20 14:39:59.422053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:52.613 [2024-11-20 14:39:59.422064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:52.613 [2024-11-20 14:39:59.422298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:52.613 [2024-11-20 14:39:59.422309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:52.613 [2024-11-20 14:39:59.422528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:52.613 [2024-11-20 14:39:59.422538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:52.613 passed 00:19:52.613 Test: blockdev nvme admin passthru ...passed 00:19:52.613 Test: blockdev copy ...passed 00:19:52.613 00:19:52.613 Run Summary: Type Total Ran Passed Failed Inactive 00:19:52.613 suites 1 1 n/a 0 0 00:19:52.613 tests 23 23 23 0 0 00:19:52.613 asserts 152 152 152 0 n/a 00:19:52.613 00:19:52.613 Elapsed time = 0.969 seconds 
00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:52.874 rmmod nvme_tcp 00:19:52.874 rmmod nvme_fabrics 00:19:52.874 rmmod nvme_keyring 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3911598 ']' 00:19:52.874 14:39:59 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3911598 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3911598 ']' 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3911598 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3911598 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3911598' 00:19:52.874 killing process with pid 3911598 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3911598 00:19:52.874 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3911598 00:19:53.134 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:53.134 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:53.134 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:53.134 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:53.134 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:53.134 14:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:53.134 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:53.134 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:53.134 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:53.134 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.134 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.134 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.067 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:55.067 00:19:55.067 real 0m9.854s 00:19:55.067 user 0m11.924s 00:19:55.067 sys 0m4.817s 00:19:55.067 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:55.067 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.067 ************************************ 00:19:55.067 END TEST nvmf_bdevio_no_huge 00:19:55.067 ************************************ 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:55.328 
************************************ 00:19:55.328 START TEST nvmf_tls 00:19:55.328 ************************************ 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:55.328 * Looking for test storage... 00:19:55.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:55.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.328 --rc genhtml_branch_coverage=1 00:19:55.328 --rc genhtml_function_coverage=1 00:19:55.328 --rc genhtml_legend=1 00:19:55.328 --rc geninfo_all_blocks=1 00:19:55.328 --rc geninfo_unexecuted_blocks=1 00:19:55.328 00:19:55.328 ' 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:55.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.328 --rc genhtml_branch_coverage=1 00:19:55.328 --rc genhtml_function_coverage=1 00:19:55.328 --rc genhtml_legend=1 00:19:55.328 --rc geninfo_all_blocks=1 00:19:55.328 --rc geninfo_unexecuted_blocks=1 00:19:55.328 00:19:55.328 ' 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:55.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.328 --rc genhtml_branch_coverage=1 00:19:55.328 --rc genhtml_function_coverage=1 00:19:55.328 --rc genhtml_legend=1 00:19:55.328 --rc geninfo_all_blocks=1 00:19:55.328 --rc geninfo_unexecuted_blocks=1 00:19:55.328 00:19:55.328 ' 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:55.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.328 --rc genhtml_branch_coverage=1 00:19:55.328 --rc genhtml_function_coverage=1 00:19:55.328 --rc genhtml_legend=1 00:19:55.328 --rc geninfo_all_blocks=1 00:19:55.328 --rc geninfo_unexecuted_blocks=1 00:19:55.328 00:19:55.328 ' 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.328 
14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.328 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:55.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:19:55.329 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.608 14:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:00.608 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:00.608 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.608 14:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:00.608 Found net devices under 0000:31:00.0: cvl_0_0 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:00.608 Found net devices under 0000:31:00.1: cvl_0_1 00:20:00.608 14:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:00.608 
14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:00.608 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:00.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:20:00.868 00:20:00.868 --- 10.0.0.2 ping statistics --- 00:20:00.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.868 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:00.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:20:00.868 00:20:00.868 --- 10.0.0.1 ping statistics --- 00:20:00.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.868 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3916317 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3916317 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3916317 ']' 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.868 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.868 [2024-11-20 14:40:07.766538] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:20:00.868 [2024-11-20 14:40:07.766587] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.868 [2024-11-20 14:40:07.838505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.868 [2024-11-20 14:40:07.867977] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.868 [2024-11-20 14:40:07.868005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:00.868 [2024-11-20 14:40:07.868011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.868 [2024-11-20 14:40:07.868016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.869 [2024-11-20 14:40:07.868020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.869 [2024-11-20 14:40:07.868503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.869 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.869 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:00.869 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.869 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.869 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.128 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.128 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:01.128 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:01.128 true 00:20:01.128 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:01.128 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:01.387 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:01.387 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:01.387 
14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:01.387 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:01.387 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:01.645 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:01.645 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:01.645 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:01.905 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:01.905 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:01.905 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:01.905 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:01.905 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:01.905 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:02.163 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:02.163 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:02.163 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:20:02.421 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:02.421 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:02.421 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:02.421 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:02.421 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:02.680 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:02.680 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:02.680 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:02.680 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:02.680 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:02.680 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:02.680 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:02.680 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:02.680 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:02.680 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:02.680 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:02.940 14:40:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.OR89yjwgAQ 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.SWJ3bV01aC 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.OR89yjwgAQ 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.SWJ3bV01aC 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:02.940 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:03.208 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.OR89yjwgAQ 00:20:03.208 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OR89yjwgAQ 00:20:03.208 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:03.521 [2024-11-20 14:40:10.313071] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.521 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:03.521 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:03.780 [2024-11-20 14:40:10.625828] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.780 [2024-11-20 14:40:10.626034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.780 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:03.780 malloc0 00:20:03.780 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:04.039 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OR89yjwgAQ 00:20:04.297 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:04.297 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.OR89yjwgAQ 00:20:16.504 Initializing NVMe Controllers 00:20:16.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:16.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:16.504 Initialization complete. Launching workers. 
00:20:16.504 ======================================================== 00:20:16.504 Latency(us) 00:20:16.504 Device Information : IOPS MiB/s Average min max 00:20:16.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18503.10 72.28 3459.12 1034.51 4055.57 00:20:16.504 ======================================================== 00:20:16.504 Total : 18503.10 72.28 3459.12 1034.51 4055.57 00:20:16.504 00:20:16.505 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OR89yjwgAQ 00:20:16.505 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:16.505 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:16.505 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:16.505 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OR89yjwgAQ 00:20:16.505 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:16.505 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3919369 00:20:16.505 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:16.505 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3919369 /var/tmp/bdevperf.sock 00:20:16.505 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3919369 ']' 00:20:16.505 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.505 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.505 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:16.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.505 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.505 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.505 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:16.505 [2024-11-20 14:40:21.398265] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:20:16.505 [2024-11-20 14:40:21.398320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919369 ] 00:20:16.505 [2024-11-20 14:40:21.476119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.505 [2024-11-20 14:40:21.511161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.505 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.505 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:16.505 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OR89yjwgAQ 00:20:16.505 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2016-06.io.spdk:host1 --psk key0 00:20:16.505 [2024-11-20 14:40:22.463956] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.505 TLSTESTn1 00:20:16.505 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:16.505 Running I/O for 10 seconds... 00:20:17.700 3072.00 IOPS, 12.00 MiB/s [2024-11-20T13:40:25.695Z] 3361.50 IOPS, 13.13 MiB/s [2024-11-20T13:40:26.633Z] 3707.67 IOPS, 14.48 MiB/s [2024-11-20T13:40:28.015Z] 3787.50 IOPS, 14.79 MiB/s [2024-11-20T13:40:28.950Z] 3898.00 IOPS, 15.23 MiB/s [2024-11-20T13:40:29.885Z] 4019.67 IOPS, 15.70 MiB/s [2024-11-20T13:40:30.823Z] 4162.29 IOPS, 16.26 MiB/s [2024-11-20T13:40:31.762Z] 4176.38 IOPS, 16.31 MiB/s [2024-11-20T13:40:32.701Z] 4181.44 IOPS, 16.33 MiB/s [2024-11-20T13:40:32.701Z] 4229.90 IOPS, 16.52 MiB/s 00:20:25.641 Latency(us) 00:20:25.641 [2024-11-20T13:40:32.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.641 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:25.641 Verification LBA range: start 0x0 length 0x2000 00:20:25.641 TLSTESTn1 : 10.01 4236.76 16.55 0.00 0.00 30173.46 4614.83 81264.64 00:20:25.641 [2024-11-20T13:40:32.701Z] =================================================================================================================== 00:20:25.641 [2024-11-20T13:40:32.701Z] Total : 4236.76 16.55 0.00 0.00 30173.46 4614.83 81264.64 00:20:25.641 { 00:20:25.641 "results": [ 00:20:25.641 { 00:20:25.641 "job": "TLSTESTn1", 00:20:25.641 "core_mask": "0x4", 00:20:25.641 "workload": "verify", 00:20:25.641 "status": "finished", 00:20:25.641 "verify_range": { 00:20:25.641 "start": 0, 00:20:25.641 "length": 8192 00:20:25.641 }, 00:20:25.641 "queue_depth": 128, 00:20:25.641 "io_size": 4096, 00:20:25.641 "runtime": 10.013777, 
00:20:25.641 "iops": 4236.7630115989205, 00:20:25.641 "mibps": 16.549855514058283, 00:20:25.641 "io_failed": 0, 00:20:25.641 "io_timeout": 0, 00:20:25.641 "avg_latency_us": 30173.461179465423, 00:20:25.641 "min_latency_us": 4614.826666666667, 00:20:25.641 "max_latency_us": 81264.64 00:20:25.641 } 00:20:25.641 ], 00:20:25.641 "core_count": 1 00:20:25.641 } 00:20:25.641 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:25.641 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3919369 00:20:25.641 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3919369 ']' 00:20:25.641 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3919369 00:20:25.641 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:25.641 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.641 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3919369 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3919369' 00:20:25.901 killing process with pid 3919369 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3919369 00:20:25.901 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.901 00:20:25.901 Latency(us) 00:20:25.901 [2024-11-20T13:40:32.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.901 [2024-11-20T13:40:32.961Z] 
=================================================================================================================== 00:20:25.901 [2024-11-20T13:40:32.961Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3919369 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SWJ3bV01aC 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SWJ3bV01aC 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SWJ3bV01aC 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SWJ3bV01aC 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3921715 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3921715 /var/tmp/bdevperf.sock 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3921715 ']' 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.901 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:25.901 [2024-11-20 14:40:32.856024] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:20:25.901 [2024-11-20 14:40:32.856079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921715 ] 00:20:25.901 [2024-11-20 14:40:32.920874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.901 [2024-11-20 14:40:32.949601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.161 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.161 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:26.161 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SWJ3bV01aC 00:20:26.161 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:26.421 [2024-11-20 14:40:33.311790] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.421 [2024-11-20 14:40:33.316355] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:26.421 [2024-11-20 14:40:33.316983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196a960 (107): Transport endpoint is not connected 00:20:26.421 [2024-11-20 14:40:33.317977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196a960 (9): Bad file descriptor 00:20:26.421 
[2024-11-20 14:40:33.318979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:26.421 [2024-11-20 14:40:33.318991] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:26.421 [2024-11-20 14:40:33.318996] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:26.421 [2024-11-20 14:40:33.319005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:26.421 request: 00:20:26.421 { 00:20:26.421 "name": "TLSTEST", 00:20:26.421 "trtype": "tcp", 00:20:26.421 "traddr": "10.0.0.2", 00:20:26.421 "adrfam": "ipv4", 00:20:26.421 "trsvcid": "4420", 00:20:26.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:26.421 "prchk_reftag": false, 00:20:26.421 "prchk_guard": false, 00:20:26.421 "hdgst": false, 00:20:26.421 "ddgst": false, 00:20:26.421 "psk": "key0", 00:20:26.421 "allow_unrecognized_csi": false, 00:20:26.421 "method": "bdev_nvme_attach_controller", 00:20:26.421 "req_id": 1 00:20:26.421 } 00:20:26.421 Got JSON-RPC error response 00:20:26.421 response: 00:20:26.421 { 00:20:26.421 "code": -5, 00:20:26.421 "message": "Input/output error" 00:20:26.421 } 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3921715 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3921715 ']' 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3921715 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3921715 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3921715' 00:20:26.421 killing process with pid 3921715 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3921715 00:20:26.421 Received shutdown signal, test time was about 10.000000 seconds 00:20:26.421 00:20:26.421 Latency(us) 00:20:26.421 [2024-11-20T13:40:33.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.421 [2024-11-20T13:40:33.481Z] =================================================================================================================== 00:20:26.421 [2024-11-20T13:40:33.481Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3921715 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OR89yjwgAQ 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OR89yjwgAQ 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OR89yjwgAQ 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OR89yjwgAQ 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3922049 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3922049 /var/tmp/bdevperf.sock 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 10 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3922049 ']' 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.421 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.681 [2024-11-20 14:40:33.495868] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:20:26.681 [2024-11-20 14:40:33.495916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922049 ] 00:20:26.681 [2024-11-20 14:40:33.551805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.681 [2024-11-20 14:40:33.580003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.681 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.681 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:26.681 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OR89yjwgAQ 00:20:26.939 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:26.939 [2024-11-20 14:40:33.941969] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.939 [2024-11-20 14:40:33.946300] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:26.939 [2024-11-20 14:40:33.946320] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:26.939 [2024-11-20 14:40:33.946339] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:26.939 [2024-11-20 14:40:33.947028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222a960 (107): Transport endpoint is not connected 00:20:26.939 [2024-11-20 14:40:33.948023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222a960 (9): Bad file descriptor 00:20:26.939 [2024-11-20 14:40:33.949025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:26.939 [2024-11-20 14:40:33.949033] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:26.939 [2024-11-20 14:40:33.949039] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:26.940 [2024-11-20 14:40:33.949049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:26.940 request: 00:20:26.940 { 00:20:26.940 "name": "TLSTEST", 00:20:26.940 "trtype": "tcp", 00:20:26.940 "traddr": "10.0.0.2", 00:20:26.940 "adrfam": "ipv4", 00:20:26.940 "trsvcid": "4420", 00:20:26.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.940 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:26.940 "prchk_reftag": false, 00:20:26.940 "prchk_guard": false, 00:20:26.940 "hdgst": false, 00:20:26.940 "ddgst": false, 00:20:26.940 "psk": "key0", 00:20:26.940 "allow_unrecognized_csi": false, 00:20:26.940 "method": "bdev_nvme_attach_controller", 00:20:26.940 "req_id": 1 00:20:26.940 } 00:20:26.940 Got JSON-RPC error response 00:20:26.940 response: 00:20:26.940 { 00:20:26.940 "code": -5, 00:20:26.940 "message": "Input/output error" 00:20:26.940 } 00:20:26.940 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3922049 00:20:26.940 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3922049 ']' 00:20:26.940 14:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3922049 00:20:26.940 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:26.940 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.940 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3922049 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3922049' 00:20:27.199 killing process with pid 3922049 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3922049 00:20:27.199 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.199 00:20:27.199 Latency(us) 00:20:27.199 [2024-11-20T13:40:34.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.199 [2024-11-20T13:40:34.259Z] =================================================================================================================== 00:20:27.199 [2024-11-20T13:40:34.259Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3922049 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:27.199 14:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OR89yjwgAQ 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OR89yjwgAQ 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OR89yjwgAQ 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OR89yjwgAQ 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3922065 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3922065 /var/tmp/bdevperf.sock 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3922065 ']' 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.199 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.199 [2024-11-20 14:40:34.136523] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:20:27.199 [2024-11-20 14:40:34.136575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922065 ] 00:20:27.199 [2024-11-20 14:40:34.201855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.199 [2024-11-20 14:40:34.229045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.459 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.459 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:27.459 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OR89yjwgAQ 00:20:27.459 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:27.720 [2024-11-20 14:40:34.598949] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.720 [2024-11-20 14:40:34.606729] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:27.721 [2024-11-20 14:40:34.606749] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:27.721 [2024-11-20 14:40:34.606768] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:27.721 [2024-11-20 14:40:34.607086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a4960 (107): Transport endpoint is not connected 00:20:27.721 [2024-11-20 14:40:34.608082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a4960 (9): Bad file descriptor 00:20:27.721 [2024-11-20 14:40:34.609084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:27.721 [2024-11-20 14:40:34.609094] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:27.721 [2024-11-20 14:40:34.609100] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:27.721 [2024-11-20 14:40:34.609107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:20:27.721 request: 00:20:27.721 { 00:20:27.721 "name": "TLSTEST", 00:20:27.721 "trtype": "tcp", 00:20:27.721 "traddr": "10.0.0.2", 00:20:27.721 "adrfam": "ipv4", 00:20:27.721 "trsvcid": "4420", 00:20:27.721 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:27.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.721 "prchk_reftag": false, 00:20:27.721 "prchk_guard": false, 00:20:27.721 "hdgst": false, 00:20:27.721 "ddgst": false, 00:20:27.721 "psk": "key0", 00:20:27.721 "allow_unrecognized_csi": false, 00:20:27.721 "method": "bdev_nvme_attach_controller", 00:20:27.721 "req_id": 1 00:20:27.721 } 00:20:27.721 Got JSON-RPC error response 00:20:27.721 response: 00:20:27.721 { 00:20:27.721 "code": -5, 00:20:27.721 "message": "Input/output error" 00:20:27.721 } 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3922065 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3922065 ']' 00:20:27.721 14:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3922065 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3922065 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3922065' 00:20:27.721 killing process with pid 3922065 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3922065 00:20:27.721 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.721 00:20:27.721 Latency(us) 00:20:27.721 [2024-11-20T13:40:34.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.721 [2024-11-20T13:40:34.781Z] =================================================================================================================== 00:20:27.721 [2024-11-20T13:40:34.781Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3922065 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:27.721 14:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3922306 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:27.721 14:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3922306 /var/tmp/bdevperf.sock 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3922306 ']' 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.721 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:27.981 [2024-11-20 14:40:34.802978] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:20:27.981 [2024-11-20 14:40:34.803034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922306 ] 00:20:27.981 [2024-11-20 14:40:34.868189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.981 [2024-11-20 14:40:34.896303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.981 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.981 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:27.981 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:28.241 [2024-11-20 14:40:35.105822] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:28.241 [2024-11-20 14:40:35.105851] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:28.241 request: 00:20:28.241 { 00:20:28.241 "name": "key0", 00:20:28.241 "path": "", 00:20:28.241 "method": "keyring_file_add_key", 00:20:28.241 "req_id": 1 00:20:28.241 } 00:20:28.241 Got JSON-RPC error response 00:20:28.241 response: 00:20:28.241 { 00:20:28.241 "code": -1, 00:20:28.241 "message": "Operation not permitted" 00:20:28.241 } 00:20:28.241 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:28.241 [2024-11-20 14:40:35.262296] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:20:28.241 [2024-11-20 14:40:35.262319] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:28.241 request: 00:20:28.241 { 00:20:28.241 "name": "TLSTEST", 00:20:28.241 "trtype": "tcp", 00:20:28.241 "traddr": "10.0.0.2", 00:20:28.241 "adrfam": "ipv4", 00:20:28.241 "trsvcid": "4420", 00:20:28.241 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.241 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:28.241 "prchk_reftag": false, 00:20:28.241 "prchk_guard": false, 00:20:28.241 "hdgst": false, 00:20:28.241 "ddgst": false, 00:20:28.241 "psk": "key0", 00:20:28.241 "allow_unrecognized_csi": false, 00:20:28.241 "method": "bdev_nvme_attach_controller", 00:20:28.241 "req_id": 1 00:20:28.241 } 00:20:28.241 Got JSON-RPC error response 00:20:28.241 response: 00:20:28.241 { 00:20:28.241 "code": -126, 00:20:28.241 "message": "Required key not available" 00:20:28.241 } 00:20:28.241 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3922306 00:20:28.241 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3922306 ']' 00:20:28.241 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3922306 00:20:28.242 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:28.242 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.242 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3922306 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3922306' 00:20:28.501 killing process with pid 3922306 
00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3922306 00:20:28.501 Received shutdown signal, test time was about 10.000000 seconds 00:20:28.501 00:20:28.501 Latency(us) 00:20:28.501 [2024-11-20T13:40:35.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.501 [2024-11-20T13:40:35.561Z] =================================================================================================================== 00:20:28.501 [2024-11-20T13:40:35.561Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3922306 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3916317 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3916317 ']' 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3916317 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3916317 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3916317' 00:20:28.501 killing process with pid 3916317 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3916317 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3916317 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:28.501 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:28.502 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:28.502 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:28.502 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Zsr9vr0dJz 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:28.762 14:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Zsr9vr0dJz 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3922428 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3922428 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3922428 ']' 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:28.762 [2024-11-20 14:40:35.638625] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:20:28.762 [2024-11-20 14:40:35.638675] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.762 [2024-11-20 14:40:35.710812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.762 [2024-11-20 14:40:35.737970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.762 [2024-11-20 14:40:35.737998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.762 [2024-11-20 14:40:35.738004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.762 [2024-11-20 14:40:35.738008] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.762 [2024-11-20 14:40:35.738012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:28.762 [2024-11-20 14:40:35.738499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:28.762 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.020 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.020 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Zsr9vr0dJz 00:20:29.020 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Zsr9vr0dJz 00:20:29.020 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:29.020 [2024-11-20 14:40:35.977781] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.020 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:29.279 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:29.279 [2024-11-20 14:40:36.286537] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:29.279 [2024-11-20 14:40:36.286733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:29.279 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:29.538 malloc0 00:20:29.538 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:29.796 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Zsr9vr0dJz 00:20:29.796 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:30.055 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Zsr9vr0dJz 00:20:30.055 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:30.055 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:30.055 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:30.055 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Zsr9vr0dJz 00:20:30.055 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:30.055 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3922784 00:20:30.055 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:30.055 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3922784 /var/tmp/bdevperf.sock 
00:20:30.055 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3922784 ']' 00:20:30.055 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.055 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.055 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.055 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.055 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.055 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:30.056 [2024-11-20 14:40:36.950950] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:20:30.056 [2024-11-20 14:40:36.950993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922784 ] 00:20:30.056 [2024-11-20 14:40:37.006420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.056 [2024-11-20 14:40:37.035088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.056 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.056 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:30.056 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Zsr9vr0dJz 00:20:30.314 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:30.574 [2024-11-20 14:40:37.397068] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:30.574 TLSTESTn1 00:20:30.574 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:30.574 Running I/O for 10 seconds... 
00:20:32.887 4042.00 IOPS, 15.79 MiB/s [2024-11-20T13:40:40.886Z] 4111.50 IOPS, 16.06 MiB/s [2024-11-20T13:40:41.823Z] 4204.00 IOPS, 16.42 MiB/s [2024-11-20T13:40:42.758Z] 4470.25 IOPS, 17.46 MiB/s [2024-11-20T13:40:43.696Z] 4426.80 IOPS, 17.29 MiB/s [2024-11-20T13:40:44.634Z] 4358.00 IOPS, 17.02 MiB/s [2024-11-20T13:40:46.014Z] 4440.57 IOPS, 17.35 MiB/s [2024-11-20T13:40:46.583Z] 4498.75 IOPS, 17.57 MiB/s [2024-11-20T13:40:47.686Z] 4467.33 IOPS, 17.45 MiB/s [2024-11-20T13:40:47.686Z] 4435.60 IOPS, 17.33 MiB/s 00:20:40.626 Latency(us) 00:20:40.626 [2024-11-20T13:40:47.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.626 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:40.626 Verification LBA range: start 0x0 length 0x2000 00:20:40.626 TLSTESTn1 : 10.06 4422.31 17.27 0.00 0.00 28849.91 5188.27 58108.59 00:20:40.626 [2024-11-20T13:40:47.686Z] =================================================================================================================== 00:20:40.626 [2024-11-20T13:40:47.686Z] Total : 4422.31 17.27 0.00 0.00 28849.91 5188.27 58108.59 00:20:40.626 { 00:20:40.626 "results": [ 00:20:40.626 { 00:20:40.626 "job": "TLSTESTn1", 00:20:40.626 "core_mask": "0x4", 00:20:40.626 "workload": "verify", 00:20:40.626 "status": "finished", 00:20:40.626 "verify_range": { 00:20:40.626 "start": 0, 00:20:40.626 "length": 8192 00:20:40.626 }, 00:20:40.626 "queue_depth": 128, 00:20:40.626 "io_size": 4096, 00:20:40.626 "runtime": 10.058759, 00:20:40.626 "iops": 4422.314919762965, 00:20:40.626 "mibps": 17.27466765532408, 00:20:40.626 "io_failed": 0, 00:20:40.626 "io_timeout": 0, 00:20:40.626 "avg_latency_us": 28849.910456279173, 00:20:40.626 "min_latency_us": 5188.266666666666, 00:20:40.626 "max_latency_us": 58108.58666666667 00:20:40.626 } 00:20:40.626 ], 00:20:40.626 "core_count": 1 00:20:40.626 } 00:20:40.626 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:20:40.626 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3922784 00:20:40.626 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3922784 ']' 00:20:40.626 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3922784 00:20:40.626 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:40.626 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.626 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3922784 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3922784' 00:20:40.887 killing process with pid 3922784 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3922784 00:20:40.887 Received shutdown signal, test time was about 10.000000 seconds 00:20:40.887 00:20:40.887 Latency(us) 00:20:40.887 [2024-11-20T13:40:47.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.887 [2024-11-20T13:40:47.947Z] =================================================================================================================== 00:20:40.887 [2024-11-20T13:40:47.947Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3922784 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Zsr9vr0dJz 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Zsr9vr0dJz 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Zsr9vr0dJz 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Zsr9vr0dJz 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Zsr9vr0dJz 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3925123 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3925123 
/var/tmp/bdevperf.sock 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3925123 ']' 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.887 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:40.887 [2024-11-20 14:40:47.838009] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:20:40.887 [2024-11-20 14:40:47.838063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925123 ] 00:20:40.887 [2024-11-20 14:40:47.903440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.887 [2024-11-20 14:40:47.930978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.147 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.147 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:41.147 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Zsr9vr0dJz 00:20:41.147 [2024-11-20 14:40:48.140563] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Zsr9vr0dJz': 0100666 00:20:41.147 [2024-11-20 14:40:48.140591] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:41.147 request: 00:20:41.147 { 00:20:41.147 "name": "key0", 00:20:41.147 "path": "/tmp/tmp.Zsr9vr0dJz", 00:20:41.147 "method": "keyring_file_add_key", 00:20:41.147 "req_id": 1 00:20:41.147 } 00:20:41.147 Got JSON-RPC error response 00:20:41.147 response: 00:20:41.147 { 00:20:41.147 "code": -1, 00:20:41.147 "message": "Operation not permitted" 00:20:41.147 } 00:20:41.147 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:41.407 [2024-11-20 14:40:48.301031] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.407 [2024-11-20 14:40:48.301055] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:41.407 request: 00:20:41.407 { 00:20:41.407 "name": "TLSTEST", 00:20:41.407 "trtype": "tcp", 00:20:41.407 "traddr": "10.0.0.2", 00:20:41.407 "adrfam": "ipv4", 00:20:41.407 "trsvcid": "4420", 00:20:41.407 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.407 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.407 "prchk_reftag": false, 00:20:41.407 "prchk_guard": false, 00:20:41.407 "hdgst": false, 00:20:41.407 "ddgst": false, 00:20:41.407 "psk": "key0", 00:20:41.407 "allow_unrecognized_csi": false, 00:20:41.407 "method": "bdev_nvme_attach_controller", 00:20:41.407 "req_id": 1 00:20:41.407 } 00:20:41.407 Got JSON-RPC error response 00:20:41.407 response: 00:20:41.407 { 00:20:41.407 "code": -126, 00:20:41.407 "message": "Required key not available" 00:20:41.407 } 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3925123 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3925123 ']' 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3925123 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3925123 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3925123' 00:20:41.407 killing process with pid 3925123 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3925123 00:20:41.407 Received shutdown signal, test time was about 10.000000 seconds 00:20:41.407 00:20:41.407 Latency(us) 00:20:41.407 [2024-11-20T13:40:48.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.407 [2024-11-20T13:40:48.467Z] =================================================================================================================== 00:20:41.407 [2024-11-20T13:40:48.467Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3925123 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3922428 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3922428 ']' 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3922428 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:41.407 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3922428 00:20:41.667 
14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3922428' 00:20:41.667 killing process with pid 3922428 00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3922428 00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3922428 00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3925467 00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3925467 00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3925467 ']' 00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:41.667 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.668 [2024-11-20 14:40:48.659070] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:20:41.668 [2024-11-20 14:40:48.659122] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.927 [2024-11-20 14:40:48.729640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.927 [2024-11-20 14:40:48.757636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.927 [2024-11-20 14:40:48.757664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.927 [2024-11-20 14:40:48.757670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.927 [2024-11-20 14:40:48.757675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.927 [2024-11-20 14:40:48.757682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:41.927 [2024-11-20 14:40:48.758142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.927 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.927 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:41.927 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:41.927 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:41.927 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.927 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.927 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Zsr9vr0dJz 00:20:41.927 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:41.927 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Zsr9vr0dJz 00:20:41.927 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:41.927 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.927 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:41.927 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.927 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.Zsr9vr0dJz 00:20:41.927 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Zsr9vr0dJz 00:20:41.927 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:42.186 [2024-11-20 14:40:48.997463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.186 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:42.186 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:42.447 [2024-11-20 14:40:49.302214] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:42.447 [2024-11-20 14:40:49.302407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.447 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:42.447 malloc0 00:20:42.447 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:42.705 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Zsr9vr0dJz 00:20:42.965 [2024-11-20 14:40:49.769328] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Zsr9vr0dJz': 0100666 00:20:42.965 [2024-11-20 14:40:49.769352] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:42.965 request: 00:20:42.965 { 00:20:42.965 "name": "key0", 00:20:42.965 "path": "/tmp/tmp.Zsr9vr0dJz", 00:20:42.965 "method": "keyring_file_add_key", 00:20:42.965 "req_id": 1 
00:20:42.965 } 00:20:42.965 Got JSON-RPC error response 00:20:42.965 response: 00:20:42.965 { 00:20:42.965 "code": -1, 00:20:42.965 "message": "Operation not permitted" 00:20:42.965 } 00:20:42.965 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:42.965 [2024-11-20 14:40:49.925724] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:42.965 [2024-11-20 14:40:49.925753] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:42.965 request: 00:20:42.965 { 00:20:42.965 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.965 "host": "nqn.2016-06.io.spdk:host1", 00:20:42.965 "psk": "key0", 00:20:42.965 "method": "nvmf_subsystem_add_host", 00:20:42.965 "req_id": 1 00:20:42.965 } 00:20:42.965 Got JSON-RPC error response 00:20:42.965 response: 00:20:42.965 { 00:20:42.965 "code": -32603, 00:20:42.965 "message": "Internal error" 00:20:42.965 } 00:20:42.965 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:42.965 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:42.965 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:42.965 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:42.965 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3925467 00:20:42.965 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3925467 ']' 00:20:42.965 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3925467 00:20:42.965 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:42.965 14:40:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.965 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3925467 00:20:42.965 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:42.965 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:42.965 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3925467' 00:20:42.965 killing process with pid 3925467 00:20:42.965 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3925467 00:20:42.965 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3925467 00:20:43.225 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Zsr9vr0dJz 00:20:43.225 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:43.225 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:43.225 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.225 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.225 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3925806 00:20:43.225 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:43.225 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3925806 00:20:43.225 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3925806 ']' 00:20:43.225 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.225 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.225 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.225 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.225 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.225 [2024-11-20 14:40:50.133339] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:20:43.225 [2024-11-20 14:40:50.133398] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.225 [2024-11-20 14:40:50.202638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.225 [2024-11-20 14:40:50.231229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.225 [2024-11-20 14:40:50.231265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.225 [2024-11-20 14:40:50.231271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.225 [2024-11-20 14:40:50.231276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.225 [2024-11-20 14:40:50.231280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:43.225 [2024-11-20 14:40:50.231736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.485 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.485 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:43.485 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.485 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.485 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.485 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.485 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Zsr9vr0dJz 00:20:43.485 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Zsr9vr0dJz 00:20:43.486 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:43.486 [2024-11-20 14:40:50.467086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.486 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:43.745 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:43.745 [2024-11-20 14:40:50.775841] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:43.745 [2024-11-20 14:40:50.776052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:43.745 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:44.004 malloc0 00:20:44.004 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:44.263 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Zsr9vr0dJz 00:20:44.263 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:44.523 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3926036 00:20:44.523 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:44.523 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3926036 /var/tmp/bdevperf.sock 00:20:44.523 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3926036 ']' 00:20:44.523 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.523 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.523 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:44.523 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.523 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.523 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.523 [2024-11-20 14:40:51.447722] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:20:44.523 [2024-11-20 14:40:51.447774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926036 ] 00:20:44.523 [2024-11-20 14:40:51.512870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.523 [2024-11-20 14:40:51.541986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.786 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.786 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:44.786 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Zsr9vr0dJz 00:20:44.786 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:45.053 [2024-11-20 14:40:51.908151] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.053 TLSTESTn1 
00:20:45.053 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:45.311 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:45.311 "subsystems": [ 00:20:45.311 { 00:20:45.311 "subsystem": "keyring", 00:20:45.311 "config": [ 00:20:45.311 { 00:20:45.311 "method": "keyring_file_add_key", 00:20:45.311 "params": { 00:20:45.311 "name": "key0", 00:20:45.311 "path": "/tmp/tmp.Zsr9vr0dJz" 00:20:45.311 } 00:20:45.311 } 00:20:45.311 ] 00:20:45.311 }, 00:20:45.311 { 00:20:45.311 "subsystem": "iobuf", 00:20:45.311 "config": [ 00:20:45.311 { 00:20:45.311 "method": "iobuf_set_options", 00:20:45.311 "params": { 00:20:45.311 "small_pool_count": 8192, 00:20:45.311 "large_pool_count": 1024, 00:20:45.311 "small_bufsize": 8192, 00:20:45.311 "large_bufsize": 135168, 00:20:45.311 "enable_numa": false 00:20:45.311 } 00:20:45.311 } 00:20:45.311 ] 00:20:45.311 }, 00:20:45.311 { 00:20:45.311 "subsystem": "sock", 00:20:45.311 "config": [ 00:20:45.311 { 00:20:45.312 "method": "sock_set_default_impl", 00:20:45.312 "params": { 00:20:45.312 "impl_name": "posix" 00:20:45.312 } 00:20:45.312 }, 00:20:45.312 { 00:20:45.312 "method": "sock_impl_set_options", 00:20:45.312 "params": { 00:20:45.312 "impl_name": "ssl", 00:20:45.312 "recv_buf_size": 4096, 00:20:45.312 "send_buf_size": 4096, 00:20:45.312 "enable_recv_pipe": true, 00:20:45.312 "enable_quickack": false, 00:20:45.312 "enable_placement_id": 0, 00:20:45.312 "enable_zerocopy_send_server": true, 00:20:45.312 "enable_zerocopy_send_client": false, 00:20:45.312 "zerocopy_threshold": 0, 00:20:45.312 "tls_version": 0, 00:20:45.312 "enable_ktls": false 00:20:45.312 } 00:20:45.312 }, 00:20:45.312 { 00:20:45.312 "method": "sock_impl_set_options", 00:20:45.312 "params": { 00:20:45.312 "impl_name": "posix", 00:20:45.312 "recv_buf_size": 2097152, 00:20:45.312 "send_buf_size": 2097152, 00:20:45.312 "enable_recv_pipe": true, 
00:20:45.312 "enable_quickack": false, 00:20:45.312 "enable_placement_id": 0, 00:20:45.312 "enable_zerocopy_send_server": true, 00:20:45.312 "enable_zerocopy_send_client": false, 00:20:45.312 "zerocopy_threshold": 0, 00:20:45.312 "tls_version": 0, 00:20:45.312 "enable_ktls": false 00:20:45.312 } 00:20:45.312 } 00:20:45.312 ] 00:20:45.312 }, 00:20:45.312 { 00:20:45.312 "subsystem": "vmd", 00:20:45.312 "config": [] 00:20:45.312 }, 00:20:45.312 { 00:20:45.312 "subsystem": "accel", 00:20:45.312 "config": [ 00:20:45.312 { 00:20:45.312 "method": "accel_set_options", 00:20:45.312 "params": { 00:20:45.312 "small_cache_size": 128, 00:20:45.312 "large_cache_size": 16, 00:20:45.312 "task_count": 2048, 00:20:45.312 "sequence_count": 2048, 00:20:45.312 "buf_count": 2048 00:20:45.312 } 00:20:45.312 } 00:20:45.312 ] 00:20:45.312 }, 00:20:45.312 { 00:20:45.312 "subsystem": "bdev", 00:20:45.312 "config": [ 00:20:45.312 { 00:20:45.312 "method": "bdev_set_options", 00:20:45.312 "params": { 00:20:45.312 "bdev_io_pool_size": 65535, 00:20:45.312 "bdev_io_cache_size": 256, 00:20:45.312 "bdev_auto_examine": true, 00:20:45.312 "iobuf_small_cache_size": 128, 00:20:45.312 "iobuf_large_cache_size": 16 00:20:45.312 } 00:20:45.312 }, 00:20:45.312 { 00:20:45.312 "method": "bdev_raid_set_options", 00:20:45.312 "params": { 00:20:45.312 "process_window_size_kb": 1024, 00:20:45.312 "process_max_bandwidth_mb_sec": 0 00:20:45.312 } 00:20:45.312 }, 00:20:45.312 { 00:20:45.312 "method": "bdev_iscsi_set_options", 00:20:45.312 "params": { 00:20:45.312 "timeout_sec": 30 00:20:45.312 } 00:20:45.312 }, 00:20:45.312 { 00:20:45.312 "method": "bdev_nvme_set_options", 00:20:45.312 "params": { 00:20:45.312 "action_on_timeout": "none", 00:20:45.312 "timeout_us": 0, 00:20:45.312 "timeout_admin_us": 0, 00:20:45.312 "keep_alive_timeout_ms": 10000, 00:20:45.312 "arbitration_burst": 0, 00:20:45.312 "low_priority_weight": 0, 00:20:45.312 "medium_priority_weight": 0, 00:20:45.312 "high_priority_weight": 0, 00:20:45.312 
"nvme_adminq_poll_period_us": 10000, 00:20:45.312 "nvme_ioq_poll_period_us": 0, 00:20:45.312 "io_queue_requests": 0, 00:20:45.312 "delay_cmd_submit": true, 00:20:45.312 "transport_retry_count": 4, 00:20:45.312 "bdev_retry_count": 3, 00:20:45.312 "transport_ack_timeout": 0, 00:20:45.312 "ctrlr_loss_timeout_sec": 0, 00:20:45.312 "reconnect_delay_sec": 0, 00:20:45.312 "fast_io_fail_timeout_sec": 0, 00:20:45.312 "disable_auto_failback": false, 00:20:45.312 "generate_uuids": false, 00:20:45.312 "transport_tos": 0, 00:20:45.312 "nvme_error_stat": false, 00:20:45.312 "rdma_srq_size": 0, 00:20:45.312 "io_path_stat": false, 00:20:45.312 "allow_accel_sequence": false, 00:20:45.312 "rdma_max_cq_size": 0, 00:20:45.312 "rdma_cm_event_timeout_ms": 0, 00:20:45.312 "dhchap_digests": [ 00:20:45.312 "sha256", 00:20:45.312 "sha384", 00:20:45.312 "sha512" 00:20:45.312 ], 00:20:45.312 "dhchap_dhgroups": [ 00:20:45.312 "null", 00:20:45.312 "ffdhe2048", 00:20:45.312 "ffdhe3072", 00:20:45.312 "ffdhe4096", 00:20:45.312 "ffdhe6144", 00:20:45.312 "ffdhe8192" 00:20:45.312 ] 00:20:45.312 } 00:20:45.312 }, 00:20:45.312 { 00:20:45.312 "method": "bdev_nvme_set_hotplug", 00:20:45.312 "params": { 00:20:45.312 "period_us": 100000, 00:20:45.312 "enable": false 00:20:45.312 } 00:20:45.312 }, 00:20:45.312 { 00:20:45.312 "method": "bdev_malloc_create", 00:20:45.312 "params": { 00:20:45.312 "name": "malloc0", 00:20:45.312 "num_blocks": 8192, 00:20:45.312 "block_size": 4096, 00:20:45.312 "physical_block_size": 4096, 00:20:45.312 "uuid": "b3292a8c-029c-4149-84a0-914031b6624f", 00:20:45.313 "optimal_io_boundary": 0, 00:20:45.313 "md_size": 0, 00:20:45.313 "dif_type": 0, 00:20:45.313 "dif_is_head_of_md": false, 00:20:45.313 "dif_pi_format": 0 00:20:45.313 } 00:20:45.313 }, 00:20:45.313 { 00:20:45.313 "method": "bdev_wait_for_examine" 00:20:45.313 } 00:20:45.313 ] 00:20:45.313 }, 00:20:45.313 { 00:20:45.313 "subsystem": "nbd", 00:20:45.313 "config": [] 00:20:45.313 }, 00:20:45.313 { 00:20:45.313 "subsystem": 
"scheduler", 00:20:45.313 "config": [ 00:20:45.313 { 00:20:45.313 "method": "framework_set_scheduler", 00:20:45.313 "params": { 00:20:45.313 "name": "static" 00:20:45.313 } 00:20:45.313 } 00:20:45.313 ] 00:20:45.313 }, 00:20:45.313 { 00:20:45.313 "subsystem": "nvmf", 00:20:45.313 "config": [ 00:20:45.313 { 00:20:45.313 "method": "nvmf_set_config", 00:20:45.313 "params": { 00:20:45.313 "discovery_filter": "match_any", 00:20:45.313 "admin_cmd_passthru": { 00:20:45.313 "identify_ctrlr": false 00:20:45.313 }, 00:20:45.313 "dhchap_digests": [ 00:20:45.313 "sha256", 00:20:45.313 "sha384", 00:20:45.313 "sha512" 00:20:45.313 ], 00:20:45.313 "dhchap_dhgroups": [ 00:20:45.313 "null", 00:20:45.313 "ffdhe2048", 00:20:45.313 "ffdhe3072", 00:20:45.313 "ffdhe4096", 00:20:45.313 "ffdhe6144", 00:20:45.313 "ffdhe8192" 00:20:45.313 ] 00:20:45.313 } 00:20:45.313 }, 00:20:45.313 { 00:20:45.313 "method": "nvmf_set_max_subsystems", 00:20:45.313 "params": { 00:20:45.313 "max_subsystems": 1024 00:20:45.313 } 00:20:45.313 }, 00:20:45.313 { 00:20:45.313 "method": "nvmf_set_crdt", 00:20:45.313 "params": { 00:20:45.313 "crdt1": 0, 00:20:45.313 "crdt2": 0, 00:20:45.313 "crdt3": 0 00:20:45.313 } 00:20:45.313 }, 00:20:45.313 { 00:20:45.313 "method": "nvmf_create_transport", 00:20:45.313 "params": { 00:20:45.313 "trtype": "TCP", 00:20:45.313 "max_queue_depth": 128, 00:20:45.313 "max_io_qpairs_per_ctrlr": 127, 00:20:45.313 "in_capsule_data_size": 4096, 00:20:45.313 "max_io_size": 131072, 00:20:45.313 "io_unit_size": 131072, 00:20:45.313 "max_aq_depth": 128, 00:20:45.313 "num_shared_buffers": 511, 00:20:45.313 "buf_cache_size": 4294967295, 00:20:45.313 "dif_insert_or_strip": false, 00:20:45.313 "zcopy": false, 00:20:45.313 "c2h_success": false, 00:20:45.313 "sock_priority": 0, 00:20:45.313 "abort_timeout_sec": 1, 00:20:45.313 "ack_timeout": 0, 00:20:45.313 "data_wr_pool_size": 0 00:20:45.313 } 00:20:45.313 }, 00:20:45.313 { 00:20:45.313 "method": "nvmf_create_subsystem", 00:20:45.313 "params": { 
00:20:45.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.313 "allow_any_host": false, 00:20:45.313 "serial_number": "SPDK00000000000001", 00:20:45.313 "model_number": "SPDK bdev Controller", 00:20:45.313 "max_namespaces": 10, 00:20:45.313 "min_cntlid": 1, 00:20:45.313 "max_cntlid": 65519, 00:20:45.313 "ana_reporting": false 00:20:45.313 } 00:20:45.313 }, 00:20:45.313 { 00:20:45.313 "method": "nvmf_subsystem_add_host", 00:20:45.313 "params": { 00:20:45.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.313 "host": "nqn.2016-06.io.spdk:host1", 00:20:45.313 "psk": "key0" 00:20:45.313 } 00:20:45.313 }, 00:20:45.313 { 00:20:45.313 "method": "nvmf_subsystem_add_ns", 00:20:45.313 "params": { 00:20:45.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.313 "namespace": { 00:20:45.313 "nsid": 1, 00:20:45.313 "bdev_name": "malloc0", 00:20:45.313 "nguid": "B3292A8C029C414984A0914031B6624F", 00:20:45.313 "uuid": "b3292a8c-029c-4149-84a0-914031b6624f", 00:20:45.313 "no_auto_visible": false 00:20:45.313 } 00:20:45.313 } 00:20:45.313 }, 00:20:45.313 { 00:20:45.313 "method": "nvmf_subsystem_add_listener", 00:20:45.313 "params": { 00:20:45.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.313 "listen_address": { 00:20:45.313 "trtype": "TCP", 00:20:45.313 "adrfam": "IPv4", 00:20:45.313 "traddr": "10.0.0.2", 00:20:45.313 "trsvcid": "4420" 00:20:45.313 }, 00:20:45.313 "secure_channel": true 00:20:45.313 } 00:20:45.313 } 00:20:45.313 ] 00:20:45.313 } 00:20:45.313 ] 00:20:45.313 }' 00:20:45.313 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:45.573 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:45.573 "subsystems": [ 00:20:45.573 { 00:20:45.573 "subsystem": "keyring", 00:20:45.573 "config": [ 00:20:45.573 { 00:20:45.573 "method": "keyring_file_add_key", 00:20:45.573 "params": { 00:20:45.573 "name": "key0", 
00:20:45.573 "path": "/tmp/tmp.Zsr9vr0dJz" 00:20:45.573 } 00:20:45.573 } 00:20:45.573 ] 00:20:45.573 }, 00:20:45.573 { 00:20:45.573 "subsystem": "iobuf", 00:20:45.573 "config": [ 00:20:45.573 { 00:20:45.573 "method": "iobuf_set_options", 00:20:45.573 "params": { 00:20:45.573 "small_pool_count": 8192, 00:20:45.573 "large_pool_count": 1024, 00:20:45.573 "small_bufsize": 8192, 00:20:45.573 "large_bufsize": 135168, 00:20:45.573 "enable_numa": false 00:20:45.573 } 00:20:45.573 } 00:20:45.573 ] 00:20:45.573 }, 00:20:45.573 { 00:20:45.573 "subsystem": "sock", 00:20:45.573 "config": [ 00:20:45.573 { 00:20:45.573 "method": "sock_set_default_impl", 00:20:45.573 "params": { 00:20:45.573 "impl_name": "posix" 00:20:45.573 } 00:20:45.573 }, 00:20:45.573 { 00:20:45.573 "method": "sock_impl_set_options", 00:20:45.573 "params": { 00:20:45.573 "impl_name": "ssl", 00:20:45.573 "recv_buf_size": 4096, 00:20:45.573 "send_buf_size": 4096, 00:20:45.573 "enable_recv_pipe": true, 00:20:45.573 "enable_quickack": false, 00:20:45.573 "enable_placement_id": 0, 00:20:45.573 "enable_zerocopy_send_server": true, 00:20:45.573 "enable_zerocopy_send_client": false, 00:20:45.573 "zerocopy_threshold": 0, 00:20:45.573 "tls_version": 0, 00:20:45.573 "enable_ktls": false 00:20:45.573 } 00:20:45.573 }, 00:20:45.573 { 00:20:45.573 "method": "sock_impl_set_options", 00:20:45.573 "params": { 00:20:45.573 "impl_name": "posix", 00:20:45.573 "recv_buf_size": 2097152, 00:20:45.573 "send_buf_size": 2097152, 00:20:45.573 "enable_recv_pipe": true, 00:20:45.573 "enable_quickack": false, 00:20:45.573 "enable_placement_id": 0, 00:20:45.573 "enable_zerocopy_send_server": true, 00:20:45.573 "enable_zerocopy_send_client": false, 00:20:45.573 "zerocopy_threshold": 0, 00:20:45.573 "tls_version": 0, 00:20:45.573 "enable_ktls": false 00:20:45.573 } 00:20:45.573 } 00:20:45.573 ] 00:20:45.573 }, 00:20:45.573 { 00:20:45.573 "subsystem": "vmd", 00:20:45.573 "config": [] 00:20:45.573 }, 00:20:45.573 { 00:20:45.573 "subsystem": 
"accel", 00:20:45.573 "config": [ 00:20:45.573 { 00:20:45.573 "method": "accel_set_options", 00:20:45.573 "params": { 00:20:45.573 "small_cache_size": 128, 00:20:45.573 "large_cache_size": 16, 00:20:45.573 "task_count": 2048, 00:20:45.573 "sequence_count": 2048, 00:20:45.573 "buf_count": 2048 00:20:45.573 } 00:20:45.573 } 00:20:45.573 ] 00:20:45.573 }, 00:20:45.573 { 00:20:45.573 "subsystem": "bdev", 00:20:45.573 "config": [ 00:20:45.573 { 00:20:45.573 "method": "bdev_set_options", 00:20:45.573 "params": { 00:20:45.573 "bdev_io_pool_size": 65535, 00:20:45.573 "bdev_io_cache_size": 256, 00:20:45.573 "bdev_auto_examine": true, 00:20:45.573 "iobuf_small_cache_size": 128, 00:20:45.573 "iobuf_large_cache_size": 16 00:20:45.573 } 00:20:45.573 }, 00:20:45.573 { 00:20:45.573 "method": "bdev_raid_set_options", 00:20:45.573 "params": { 00:20:45.573 "process_window_size_kb": 1024, 00:20:45.573 "process_max_bandwidth_mb_sec": 0 00:20:45.573 } 00:20:45.573 }, 00:20:45.573 { 00:20:45.573 "method": "bdev_iscsi_set_options", 00:20:45.573 "params": { 00:20:45.573 "timeout_sec": 30 00:20:45.573 } 00:20:45.573 }, 00:20:45.573 { 00:20:45.573 "method": "bdev_nvme_set_options", 00:20:45.573 "params": { 00:20:45.573 "action_on_timeout": "none", 00:20:45.573 "timeout_us": 0, 00:20:45.573 "timeout_admin_us": 0, 00:20:45.573 "keep_alive_timeout_ms": 10000, 00:20:45.573 "arbitration_burst": 0, 00:20:45.573 "low_priority_weight": 0, 00:20:45.573 "medium_priority_weight": 0, 00:20:45.573 "high_priority_weight": 0, 00:20:45.573 "nvme_adminq_poll_period_us": 10000, 00:20:45.573 "nvme_ioq_poll_period_us": 0, 00:20:45.573 "io_queue_requests": 512, 00:20:45.573 "delay_cmd_submit": true, 00:20:45.573 "transport_retry_count": 4, 00:20:45.573 "bdev_retry_count": 3, 00:20:45.573 "transport_ack_timeout": 0, 00:20:45.573 "ctrlr_loss_timeout_sec": 0, 00:20:45.573 "reconnect_delay_sec": 0, 00:20:45.573 "fast_io_fail_timeout_sec": 0, 00:20:45.573 "disable_auto_failback": false, 00:20:45.573 
"generate_uuids": false, 00:20:45.573 "transport_tos": 0, 00:20:45.573 "nvme_error_stat": false, 00:20:45.573 "rdma_srq_size": 0, 00:20:45.573 "io_path_stat": false, 00:20:45.573 "allow_accel_sequence": false, 00:20:45.573 "rdma_max_cq_size": 0, 00:20:45.573 "rdma_cm_event_timeout_ms": 0, 00:20:45.573 "dhchap_digests": [ 00:20:45.573 "sha256", 00:20:45.573 "sha384", 00:20:45.573 "sha512" 00:20:45.573 ], 00:20:45.573 "dhchap_dhgroups": [ 00:20:45.573 "null", 00:20:45.573 "ffdhe2048", 00:20:45.573 "ffdhe3072", 00:20:45.573 "ffdhe4096", 00:20:45.573 "ffdhe6144", 00:20:45.573 "ffdhe8192" 00:20:45.573 ] 00:20:45.573 } 00:20:45.573 }, 00:20:45.573 { 00:20:45.573 "method": "bdev_nvme_attach_controller", 00:20:45.573 "params": { 00:20:45.573 "name": "TLSTEST", 00:20:45.573 "trtype": "TCP", 00:20:45.573 "adrfam": "IPv4", 00:20:45.573 "traddr": "10.0.0.2", 00:20:45.573 "trsvcid": "4420", 00:20:45.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.573 "prchk_reftag": false, 00:20:45.573 "prchk_guard": false, 00:20:45.573 "ctrlr_loss_timeout_sec": 0, 00:20:45.573 "reconnect_delay_sec": 0, 00:20:45.573 "fast_io_fail_timeout_sec": 0, 00:20:45.573 "psk": "key0", 00:20:45.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.573 "hdgst": false, 00:20:45.573 "ddgst": false, 00:20:45.573 "multipath": "multipath" 00:20:45.573 } 00:20:45.573 }, 00:20:45.573 { 00:20:45.573 "method": "bdev_nvme_set_hotplug", 00:20:45.573 "params": { 00:20:45.573 "period_us": 100000, 00:20:45.573 "enable": false 00:20:45.573 } 00:20:45.573 }, 00:20:45.573 { 00:20:45.573 "method": "bdev_wait_for_examine" 00:20:45.573 } 00:20:45.573 ] 00:20:45.573 }, 00:20:45.573 { 00:20:45.573 "subsystem": "nbd", 00:20:45.573 "config": [] 00:20:45.573 } 00:20:45.573 ] 00:20:45.573 }' 00:20:45.573 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3926036 00:20:45.573 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3926036 ']' 00:20:45.574 14:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3926036 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3926036 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3926036' 00:20:45.574 killing process with pid 3926036 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3926036 00:20:45.574 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.574 00:20:45.574 Latency(us) 00:20:45.574 [2024-11-20T13:40:52.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.574 [2024-11-20T13:40:52.634Z] =================================================================================================================== 00:20:45.574 [2024-11-20T13:40:52.634Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3926036 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3925806 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3925806 ']' 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3925806 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 
00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3925806 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3925806' 00:20:45.574 killing process with pid 3925806 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3925806 00:20:45.574 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3925806 00:20:45.834 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:45.834 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:45.834 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:45.834 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.834 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:45.834 "subsystems": [ 00:20:45.834 { 00:20:45.834 "subsystem": "keyring", 00:20:45.834 "config": [ 00:20:45.834 { 00:20:45.834 "method": "keyring_file_add_key", 00:20:45.834 "params": { 00:20:45.834 "name": "key0", 00:20:45.834 "path": "/tmp/tmp.Zsr9vr0dJz" 00:20:45.834 } 00:20:45.834 } 00:20:45.834 ] 00:20:45.834 }, 00:20:45.834 { 00:20:45.834 "subsystem": "iobuf", 00:20:45.834 "config": [ 00:20:45.834 { 00:20:45.834 "method": "iobuf_set_options", 00:20:45.834 "params": { 00:20:45.834 "small_pool_count": 8192, 00:20:45.834 "large_pool_count": 1024, 00:20:45.834 
"small_bufsize": 8192, 00:20:45.834 "large_bufsize": 135168, 00:20:45.834 "enable_numa": false 00:20:45.834 } 00:20:45.834 } 00:20:45.834 ] 00:20:45.834 }, 00:20:45.834 { 00:20:45.834 "subsystem": "sock", 00:20:45.834 "config": [ 00:20:45.834 { 00:20:45.834 "method": "sock_set_default_impl", 00:20:45.834 "params": { 00:20:45.834 "impl_name": "posix" 00:20:45.834 } 00:20:45.834 }, 00:20:45.834 { 00:20:45.834 "method": "sock_impl_set_options", 00:20:45.834 "params": { 00:20:45.834 "impl_name": "ssl", 00:20:45.834 "recv_buf_size": 4096, 00:20:45.834 "send_buf_size": 4096, 00:20:45.834 "enable_recv_pipe": true, 00:20:45.834 "enable_quickack": false, 00:20:45.834 "enable_placement_id": 0, 00:20:45.834 "enable_zerocopy_send_server": true, 00:20:45.834 "enable_zerocopy_send_client": false, 00:20:45.834 "zerocopy_threshold": 0, 00:20:45.834 "tls_version": 0, 00:20:45.834 "enable_ktls": false 00:20:45.834 } 00:20:45.834 }, 00:20:45.834 { 00:20:45.834 "method": "sock_impl_set_options", 00:20:45.834 "params": { 00:20:45.834 "impl_name": "posix", 00:20:45.834 "recv_buf_size": 2097152, 00:20:45.834 "send_buf_size": 2097152, 00:20:45.834 "enable_recv_pipe": true, 00:20:45.834 "enable_quickack": false, 00:20:45.834 "enable_placement_id": 0, 00:20:45.834 "enable_zerocopy_send_server": true, 00:20:45.834 "enable_zerocopy_send_client": false, 00:20:45.834 "zerocopy_threshold": 0, 00:20:45.834 "tls_version": 0, 00:20:45.834 "enable_ktls": false 00:20:45.834 } 00:20:45.834 } 00:20:45.834 ] 00:20:45.834 }, 00:20:45.834 { 00:20:45.834 "subsystem": "vmd", 00:20:45.834 "config": [] 00:20:45.834 }, 00:20:45.834 { 00:20:45.834 "subsystem": "accel", 00:20:45.834 "config": [ 00:20:45.834 { 00:20:45.834 "method": "accel_set_options", 00:20:45.834 "params": { 00:20:45.834 "small_cache_size": 128, 00:20:45.834 "large_cache_size": 16, 00:20:45.834 "task_count": 2048, 00:20:45.834 "sequence_count": 2048, 00:20:45.834 "buf_count": 2048 00:20:45.834 } 00:20:45.834 } 00:20:45.834 ] 00:20:45.834 }, 
00:20:45.834 { 00:20:45.834 "subsystem": "bdev", 00:20:45.834 "config": [ 00:20:45.834 { 00:20:45.834 "method": "bdev_set_options", 00:20:45.834 "params": { 00:20:45.834 "bdev_io_pool_size": 65535, 00:20:45.834 "bdev_io_cache_size": 256, 00:20:45.834 "bdev_auto_examine": true, 00:20:45.834 "iobuf_small_cache_size": 128, 00:20:45.834 "iobuf_large_cache_size": 16 00:20:45.834 } 00:20:45.834 }, 00:20:45.834 { 00:20:45.834 "method": "bdev_raid_set_options", 00:20:45.834 "params": { 00:20:45.834 "process_window_size_kb": 1024, 00:20:45.834 "process_max_bandwidth_mb_sec": 0 00:20:45.834 } 00:20:45.834 }, 00:20:45.834 { 00:20:45.834 "method": "bdev_iscsi_set_options", 00:20:45.834 "params": { 00:20:45.834 "timeout_sec": 30 00:20:45.834 } 00:20:45.834 }, 00:20:45.834 { 00:20:45.834 "method": "bdev_nvme_set_options", 00:20:45.834 "params": { 00:20:45.834 "action_on_timeout": "none", 00:20:45.834 "timeout_us": 0, 00:20:45.834 "timeout_admin_us": 0, 00:20:45.834 "keep_alive_timeout_ms": 10000, 00:20:45.834 "arbitration_burst": 0, 00:20:45.834 "low_priority_weight": 0, 00:20:45.834 "medium_priority_weight": 0, 00:20:45.834 "high_priority_weight": 0, 00:20:45.834 "nvme_adminq_poll_period_us": 10000, 00:20:45.834 "nvme_ioq_poll_period_us": 0, 00:20:45.834 "io_queue_requests": 0, 00:20:45.834 "delay_cmd_submit": true, 00:20:45.834 "transport_retry_count": 4, 00:20:45.834 "bdev_retry_count": 3, 00:20:45.834 "transport_ack_timeout": 0, 00:20:45.834 "ctrlr_loss_timeout_sec": 0, 00:20:45.834 "reconnect_delay_sec": 0, 00:20:45.834 "fast_io_fail_timeout_sec": 0, 00:20:45.834 "disable_auto_failback": false, 00:20:45.834 "generate_uuids": false, 00:20:45.834 "transport_tos": 0, 00:20:45.834 "nvme_error_stat": false, 00:20:45.834 "rdma_srq_size": 0, 00:20:45.834 "io_path_stat": false, 00:20:45.834 "allow_accel_sequence": false, 00:20:45.834 "rdma_max_cq_size": 0, 00:20:45.834 "rdma_cm_event_timeout_ms": 0, 00:20:45.834 "dhchap_digests": [ 00:20:45.834 "sha256", 00:20:45.834 "sha384", 
00:20:45.834 "sha512" 00:20:45.834 ], 00:20:45.834 "dhchap_dhgroups": [ 00:20:45.834 "null", 00:20:45.834 "ffdhe2048", 00:20:45.834 "ffdhe3072", 00:20:45.834 "ffdhe4096", 00:20:45.834 "ffdhe6144", 00:20:45.834 "ffdhe8192" 00:20:45.834 ] 00:20:45.834 } 00:20:45.834 }, 00:20:45.834 { 00:20:45.834 "method": "bdev_nvme_set_hotplug", 00:20:45.834 "params": { 00:20:45.834 "period_us": 100000, 00:20:45.834 "enable": false 00:20:45.834 } 00:20:45.834 }, 00:20:45.834 { 00:20:45.834 "method": "bdev_malloc_create", 00:20:45.834 "params": { 00:20:45.834 "name": "malloc0", 00:20:45.834 "num_blocks": 8192, 00:20:45.834 "block_size": 4096, 00:20:45.834 "physical_block_size": 4096, 00:20:45.835 "uuid": "b3292a8c-029c-4149-84a0-914031b6624f", 00:20:45.835 "optimal_io_boundary": 0, 00:20:45.835 "md_size": 0, 00:20:45.835 "dif_type": 0, 00:20:45.835 "dif_is_head_of_md": false, 00:20:45.835 "dif_pi_format": 0 00:20:45.835 } 00:20:45.835 }, 00:20:45.835 { 00:20:45.835 "method": "bdev_wait_for_examine" 00:20:45.835 } 00:20:45.835 ] 00:20:45.835 }, 00:20:45.835 { 00:20:45.835 "subsystem": "nbd", 00:20:45.835 "config": [] 00:20:45.835 }, 00:20:45.835 { 00:20:45.835 "subsystem": "scheduler", 00:20:45.835 "config": [ 00:20:45.835 { 00:20:45.835 "method": "framework_set_scheduler", 00:20:45.835 "params": { 00:20:45.835 "name": "static" 00:20:45.835 } 00:20:45.835 } 00:20:45.835 ] 00:20:45.835 }, 00:20:45.835 { 00:20:45.835 "subsystem": "nvmf", 00:20:45.835 "config": [ 00:20:45.835 { 00:20:45.835 "method": "nvmf_set_config", 00:20:45.835 "params": { 00:20:45.835 "discovery_filter": "match_any", 00:20:45.835 "admin_cmd_passthru": { 00:20:45.835 "identify_ctrlr": false 00:20:45.835 }, 00:20:45.835 "dhchap_digests": [ 00:20:45.835 "sha256", 00:20:45.835 "sha384", 00:20:45.835 "sha512" 00:20:45.835 ], 00:20:45.835 "dhchap_dhgroups": [ 00:20:45.835 "null", 00:20:45.835 "ffdhe2048", 00:20:45.835 "ffdhe3072", 00:20:45.835 "ffdhe4096", 00:20:45.835 "ffdhe6144", 00:20:45.835 "ffdhe8192" 00:20:45.835 ] 
00:20:45.835 } 00:20:45.835 }, 00:20:45.835 { 00:20:45.835 "method": "nvmf_set_max_subsystems", 00:20:45.835 "params": { 00:20:45.835 "max_subsystems": 1024 00:20:45.835 } 00:20:45.835 }, 00:20:45.835 { 00:20:45.835 "method": "nvmf_set_crdt", 00:20:45.835 "params": { 00:20:45.835 "crdt1": 0, 00:20:45.835 "crdt2": 0, 00:20:45.835 "crdt3": 0 00:20:45.835 } 00:20:45.835 }, 00:20:45.835 { 00:20:45.835 "method": "nvmf_create_transport", 00:20:45.835 "params": { 00:20:45.835 "trtype": "TCP", 00:20:45.835 "max_queue_depth": 128, 00:20:45.835 "max_io_qpairs_per_ctrlr": 127, 00:20:45.835 "in_capsule_data_size": 4096, 00:20:45.835 "max_io_size": 131072, 00:20:45.835 "io_unit_size": 131072, 00:20:45.835 "max_aq_depth": 128, 00:20:45.835 "num_shared_buffers": 511, 00:20:45.835 "buf_cache_size": 4294967295, 00:20:45.835 "dif_insert_or_strip": false, 00:20:45.835 "zcopy": false, 00:20:45.835 "c2h_success": false, 00:20:45.835 "sock_priority": 0, 00:20:45.835 "abort_timeout_sec": 1, 00:20:45.835 "ack_timeout": 0, 00:20:45.835 "data_wr_pool_size": 0 00:20:45.835 } 00:20:45.835 }, 00:20:45.835 { 00:20:45.835 "method": "nvmf_create_subsystem", 00:20:45.835 "params": { 00:20:45.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.835 "allow_any_host": false, 00:20:45.835 "serial_number": "SPDK00000000000001", 00:20:45.835 "model_number": "SPDK bdev Controller", 00:20:45.835 "max_namespaces": 10, 00:20:45.835 "min_cntlid": 1, 00:20:45.835 "max_cntlid": 65519, 00:20:45.835 "ana_reporting": false 00:20:45.835 } 00:20:45.835 }, 00:20:45.835 { 00:20:45.835 "method": "nvmf_subsystem_add_host", 00:20:45.835 "params": { 00:20:45.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.835 "host": "nqn.2016-06.io.spdk:host1", 00:20:45.835 "psk": "key0" 00:20:45.835 } 00:20:45.835 }, 00:20:45.835 { 00:20:45.835 "method": "nvmf_subsystem_add_ns", 00:20:45.835 "params": { 00:20:45.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.835 "namespace": { 00:20:45.835 "nsid": 1, 00:20:45.835 "bdev_name": 
"malloc0", 00:20:45.835 "nguid": "B3292A8C029C414984A0914031B6624F", 00:20:45.835 "uuid": "b3292a8c-029c-4149-84a0-914031b6624f", 00:20:45.835 "no_auto_visible": false 00:20:45.835 } 00:20:45.835 } 00:20:45.835 }, 00:20:45.835 { 00:20:45.835 "method": "nvmf_subsystem_add_listener", 00:20:45.835 "params": { 00:20:45.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.835 "listen_address": { 00:20:45.835 "trtype": "TCP", 00:20:45.835 "adrfam": "IPv4", 00:20:45.835 "traddr": "10.0.0.2", 00:20:45.835 "trsvcid": "4420" 00:20:45.835 }, 00:20:45.835 "secure_channel": true 00:20:45.835 } 00:20:45.835 } 00:20:45.835 ] 00:20:45.835 } 00:20:45.835 ] 00:20:45.835 }' 00:20:45.835 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3926278 00:20:45.835 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3926278 00:20:45.835 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3926278 ']' 00:20:45.835 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.835 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.835 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:45.835 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:45.835 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.835 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.835 [2024-11-20 14:40:52.769969] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:20:45.835 [2024-11-20 14:40:52.770013] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.835 [2024-11-20 14:40:52.829720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.835 [2024-11-20 14:40:52.858145] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.835 [2024-11-20 14:40:52.858172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.835 [2024-11-20 14:40:52.858178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.835 [2024-11-20 14:40:52.858182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.835 [2024-11-20 14:40:52.858187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:45.835 [2024-11-20 14:40:52.858679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.096 [2024-11-20 14:40:53.052675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.096 [2024-11-20 14:40:53.084702] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:46.096 [2024-11-20 14:40:53.084909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.665 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.665 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:46.665 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:46.665 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:46.665 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.665 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.665 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3926562 00:20:46.665 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3926562 /var/tmp/bdevperf.sock 00:20:46.665 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3926562 ']' 00:20:46.666 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.666 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.666 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:46.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:46.666 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.666 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.666 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:46.666 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:46.666 "subsystems": [ 00:20:46.666 { 00:20:46.666 "subsystem": "keyring", 00:20:46.666 "config": [ 00:20:46.666 { 00:20:46.666 "method": "keyring_file_add_key", 00:20:46.666 "params": { 00:20:46.666 "name": "key0", 00:20:46.666 "path": "/tmp/tmp.Zsr9vr0dJz" 00:20:46.666 } 00:20:46.666 } 00:20:46.666 ] 00:20:46.666 }, 00:20:46.666 { 00:20:46.666 "subsystem": "iobuf", 00:20:46.666 "config": [ 00:20:46.666 { 00:20:46.666 "method": "iobuf_set_options", 00:20:46.666 "params": { 00:20:46.666 "small_pool_count": 8192, 00:20:46.666 "large_pool_count": 1024, 00:20:46.666 "small_bufsize": 8192, 00:20:46.666 "large_bufsize": 135168, 00:20:46.666 "enable_numa": false 00:20:46.666 } 00:20:46.666 } 00:20:46.666 ] 00:20:46.666 }, 00:20:46.666 { 00:20:46.666 "subsystem": "sock", 00:20:46.666 "config": [ 00:20:46.666 { 00:20:46.666 "method": "sock_set_default_impl", 00:20:46.666 "params": { 00:20:46.666 "impl_name": "posix" 00:20:46.666 } 00:20:46.666 }, 00:20:46.666 { 00:20:46.666 "method": "sock_impl_set_options", 00:20:46.666 "params": { 00:20:46.666 "impl_name": "ssl", 00:20:46.666 "recv_buf_size": 4096, 00:20:46.666 "send_buf_size": 4096, 00:20:46.666 "enable_recv_pipe": true, 00:20:46.666 "enable_quickack": false, 00:20:46.666 "enable_placement_id": 0, 00:20:46.666 "enable_zerocopy_send_server": true, 00:20:46.666 
"enable_zerocopy_send_client": false, 00:20:46.666 "zerocopy_threshold": 0, 00:20:46.666 "tls_version": 0, 00:20:46.666 "enable_ktls": false 00:20:46.666 } 00:20:46.666 }, 00:20:46.666 { 00:20:46.666 "method": "sock_impl_set_options", 00:20:46.666 "params": { 00:20:46.666 "impl_name": "posix", 00:20:46.666 "recv_buf_size": 2097152, 00:20:46.666 "send_buf_size": 2097152, 00:20:46.666 "enable_recv_pipe": true, 00:20:46.666 "enable_quickack": false, 00:20:46.666 "enable_placement_id": 0, 00:20:46.666 "enable_zerocopy_send_server": true, 00:20:46.666 "enable_zerocopy_send_client": false, 00:20:46.666 "zerocopy_threshold": 0, 00:20:46.666 "tls_version": 0, 00:20:46.666 "enable_ktls": false 00:20:46.666 } 00:20:46.666 } 00:20:46.666 ] 00:20:46.666 }, 00:20:46.666 { 00:20:46.666 "subsystem": "vmd", 00:20:46.666 "config": [] 00:20:46.666 }, 00:20:46.666 { 00:20:46.666 "subsystem": "accel", 00:20:46.666 "config": [ 00:20:46.666 { 00:20:46.666 "method": "accel_set_options", 00:20:46.666 "params": { 00:20:46.666 "small_cache_size": 128, 00:20:46.666 "large_cache_size": 16, 00:20:46.666 "task_count": 2048, 00:20:46.666 "sequence_count": 2048, 00:20:46.666 "buf_count": 2048 00:20:46.666 } 00:20:46.666 } 00:20:46.666 ] 00:20:46.666 }, 00:20:46.666 { 00:20:46.666 "subsystem": "bdev", 00:20:46.666 "config": [ 00:20:46.666 { 00:20:46.666 "method": "bdev_set_options", 00:20:46.666 "params": { 00:20:46.666 "bdev_io_pool_size": 65535, 00:20:46.666 "bdev_io_cache_size": 256, 00:20:46.666 "bdev_auto_examine": true, 00:20:46.666 "iobuf_small_cache_size": 128, 00:20:46.666 "iobuf_large_cache_size": 16 00:20:46.666 } 00:20:46.666 }, 00:20:46.666 { 00:20:46.666 "method": "bdev_raid_set_options", 00:20:46.666 "params": { 00:20:46.666 "process_window_size_kb": 1024, 00:20:46.666 "process_max_bandwidth_mb_sec": 0 00:20:46.666 } 00:20:46.666 }, 00:20:46.666 { 00:20:46.666 "method": "bdev_iscsi_set_options", 00:20:46.666 "params": { 00:20:46.666 "timeout_sec": 30 00:20:46.666 } 00:20:46.666 }, 
00:20:46.666 { 00:20:46.666 "method": "bdev_nvme_set_options", 00:20:46.666 "params": { 00:20:46.666 "action_on_timeout": "none", 00:20:46.666 "timeout_us": 0, 00:20:46.666 "timeout_admin_us": 0, 00:20:46.666 "keep_alive_timeout_ms": 10000, 00:20:46.666 "arbitration_burst": 0, 00:20:46.666 "low_priority_weight": 0, 00:20:46.666 "medium_priority_weight": 0, 00:20:46.666 "high_priority_weight": 0, 00:20:46.666 "nvme_adminq_poll_period_us": 10000, 00:20:46.666 "nvme_ioq_poll_period_us": 0, 00:20:46.666 "io_queue_requests": 512, 00:20:46.666 "delay_cmd_submit": true, 00:20:46.666 "transport_retry_count": 4, 00:20:46.666 "bdev_retry_count": 3, 00:20:46.666 "transport_ack_timeout": 0, 00:20:46.666 "ctrlr_loss_timeout_sec": 0, 00:20:46.666 "reconnect_delay_sec": 0, 00:20:46.666 "fast_io_fail_timeout_sec": 0, 00:20:46.666 "disable_auto_failback": false, 00:20:46.666 "generate_uuids": false, 00:20:46.666 "transport_tos": 0, 00:20:46.666 "nvme_error_stat": false, 00:20:46.666 "rdma_srq_size": 0, 00:20:46.666 "io_path_stat": false, 00:20:46.666 "allow_accel_sequence": false, 00:20:46.667 "rdma_max_cq_size": 0, 00:20:46.667 "rdma_cm_event_timeout_ms": 0, 00:20:46.667 "dhchap_digests": [ 00:20:46.667 "sha256", 00:20:46.667 "sha384", 00:20:46.667 "sha512" 00:20:46.667 ], 00:20:46.667 "dhchap_dhgroups": [ 00:20:46.667 "null", 00:20:46.667 "ffdhe2048", 00:20:46.667 "ffdhe3072", 00:20:46.667 "ffdhe4096", 00:20:46.667 "ffdhe6144", 00:20:46.667 "ffdhe8192" 00:20:46.667 ] 00:20:46.667 } 00:20:46.667 }, 00:20:46.667 { 00:20:46.667 "method": "bdev_nvme_attach_controller", 00:20:46.667 "params": { 00:20:46.667 "name": "TLSTEST", 00:20:46.667 "trtype": "TCP", 00:20:46.667 "adrfam": "IPv4", 00:20:46.667 "traddr": "10.0.0.2", 00:20:46.667 "trsvcid": "4420", 00:20:46.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.667 "prchk_reftag": false, 00:20:46.667 "prchk_guard": false, 00:20:46.667 "ctrlr_loss_timeout_sec": 0, 00:20:46.667 "reconnect_delay_sec": 0, 00:20:46.667 
"fast_io_fail_timeout_sec": 0, 00:20:46.667 "psk": "key0", 00:20:46.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.667 "hdgst": false, 00:20:46.667 "ddgst": false, 00:20:46.667 "multipath": "multipath" 00:20:46.667 } 00:20:46.667 }, 00:20:46.667 { 00:20:46.667 "method": "bdev_nvme_set_hotplug", 00:20:46.667 "params": { 00:20:46.667 "period_us": 100000, 00:20:46.667 "enable": false 00:20:46.667 } 00:20:46.667 }, 00:20:46.667 { 00:20:46.667 "method": "bdev_wait_for_examine" 00:20:46.667 } 00:20:46.667 ] 00:20:46.667 }, 00:20:46.667 { 00:20:46.667 "subsystem": "nbd", 00:20:46.667 "config": [] 00:20:46.667 } 00:20:46.667 ] 00:20:46.667 }' 00:20:46.667 [2024-11-20 14:40:53.605875] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:20:46.667 [2024-11-20 14:40:53.605928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926562 ] 00:20:46.667 [2024-11-20 14:40:53.671183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.667 [2024-11-20 14:40:53.700414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.926 [2024-11-20 14:40:53.835323] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.493 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.493 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:47.493 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:47.493 Running I/O for 10 seconds... 
00:20:49.808 4152.00 IOPS, 16.22 MiB/s [2024-11-20T13:40:57.806Z] 4307.00 IOPS, 16.82 MiB/s [2024-11-20T13:40:58.742Z] 4841.00 IOPS, 18.91 MiB/s [2024-11-20T13:40:59.678Z] 4869.00 IOPS, 19.02 MiB/s [2024-11-20T13:41:00.616Z] 4772.60 IOPS, 18.64 MiB/s [2024-11-20T13:41:01.552Z] 4739.17 IOPS, 18.51 MiB/s [2024-11-20T13:41:02.491Z] 4823.00 IOPS, 18.84 MiB/s [2024-11-20T13:41:03.870Z] 4751.88 IOPS, 18.56 MiB/s [2024-11-20T13:41:04.809Z] 4694.89 IOPS, 18.34 MiB/s [2024-11-20T13:41:04.809Z] 4668.50 IOPS, 18.24 MiB/s 00:20:57.749 Latency(us) 00:20:57.749 [2024-11-20T13:41:04.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.749 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:57.749 Verification LBA range: start 0x0 length 0x2000 00:20:57.749 TLSTESTn1 : 10.01 4676.66 18.27 0.00 0.00 27339.67 3604.48 24576.00 00:20:57.749 [2024-11-20T13:41:04.809Z] =================================================================================================================== 00:20:57.749 [2024-11-20T13:41:04.809Z] Total : 4676.66 18.27 0.00 0.00 27339.67 3604.48 24576.00 00:20:57.749 { 00:20:57.749 "results": [ 00:20:57.749 { 00:20:57.749 "job": "TLSTESTn1", 00:20:57.749 "core_mask": "0x4", 00:20:57.749 "workload": "verify", 00:20:57.749 "status": "finished", 00:20:57.749 "verify_range": { 00:20:57.749 "start": 0, 00:20:57.749 "length": 8192 00:20:57.749 }, 00:20:57.749 "queue_depth": 128, 00:20:57.749 "io_size": 4096, 00:20:57.749 "runtime": 10.009715, 00:20:57.749 "iops": 4676.656628085814, 00:20:57.749 "mibps": 18.26818995346021, 00:20:57.749 "io_failed": 0, 00:20:57.749 "io_timeout": 0, 00:20:57.749 "avg_latency_us": 27339.670680452306, 00:20:57.749 "min_latency_us": 3604.48, 00:20:57.749 "max_latency_us": 24576.0 00:20:57.749 } 00:20:57.749 ], 00:20:57.749 "core_count": 1 00:20:57.750 } 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3926562 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3926562 ']' 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3926562 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3926562 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3926562' 00:20:57.750 killing process with pid 3926562 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3926562 00:20:57.750 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.750 00:20:57.750 Latency(us) 00:20:57.750 [2024-11-20T13:41:04.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.750 [2024-11-20T13:41:04.810Z] =================================================================================================================== 00:20:57.750 [2024-11-20T13:41:04.810Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3926562 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3926278 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 
3926278 ']' 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3926278 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3926278 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3926278' 00:20:57.750 killing process with pid 3926278 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3926278 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3926278 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3928908 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3928908 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3928908 ']' 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.750 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:58.010 [2024-11-20 14:41:04.826506] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:20:58.010 [2024-11-20 14:41:04.826546] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.010 [2024-11-20 14:41:04.894508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.010 [2024-11-20 14:41:04.922686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.011 [2024-11-20 14:41:04.922717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.011 [2024-11-20 14:41:04.922723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.011 [2024-11-20 14:41:04.922727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.011 [2024-11-20 14:41:04.922732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:58.011 [2024-11-20 14:41:04.923171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.011 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.011 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:58.011 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:58.011 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:58.011 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.011 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.011 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Zsr9vr0dJz 00:20:58.011 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Zsr9vr0dJz 00:20:58.011 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:58.270 [2024-11-20 14:41:05.163534] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.270 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:58.530 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:58.530 [2024-11-20 14:41:05.484332] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:58.530 [2024-11-20 14:41:05.484606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:58.530 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:58.790 malloc0 00:20:58.790 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:59.050 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Zsr9vr0dJz 00:20:59.050 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:59.309 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3929267 00:20:59.309 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:59.309 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:59.309 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3929267 /var/tmp/bdevperf.sock 00:20:59.309 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3929267 ']' 00:20:59.309 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.309 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.309 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:20:59.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.309 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.309 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.309 [2024-11-20 14:41:06.254046] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:20:59.309 [2024-11-20 14:41:06.254117] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929267 ] 00:20:59.309 [2024-11-20 14:41:06.326320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.309 [2024-11-20 14:41:06.362268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.569 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.569 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:59.569 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Zsr9vr0dJz 00:20:59.569 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:59.828 [2024-11-20 14:41:06.732499] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:59.828 nvme0n1 00:20:59.828 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:00.087 Running I/O for 1 seconds... 00:21:01.026 3743.00 IOPS, 14.62 MiB/s 00:21:01.026 Latency(us) 00:21:01.026 [2024-11-20T13:41:08.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.026 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:01.026 Verification LBA range: start 0x0 length 0x2000 00:21:01.026 nvme0n1 : 1.06 3652.72 14.27 0.00 0.00 34123.88 5925.55 56797.87 00:21:01.026 [2024-11-20T13:41:08.086Z] =================================================================================================================== 00:21:01.026 [2024-11-20T13:41:08.086Z] Total : 3652.72 14.27 0.00 0.00 34123.88 5925.55 56797.87 00:21:01.026 { 00:21:01.026 "results": [ 00:21:01.026 { 00:21:01.026 "job": "nvme0n1", 00:21:01.026 "core_mask": "0x2", 00:21:01.026 "workload": "verify", 00:21:01.026 "status": "finished", 00:21:01.026 "verify_range": { 00:21:01.026 "start": 0, 00:21:01.026 "length": 8192 00:21:01.026 }, 00:21:01.026 "queue_depth": 128, 00:21:01.026 "io_size": 4096, 00:21:01.026 "runtime": 1.059757, 00:21:01.026 "iops": 3652.724162237192, 00:21:01.026 "mibps": 14.268453758739032, 00:21:01.026 "io_failed": 0, 00:21:01.026 "io_timeout": 0, 00:21:01.026 "avg_latency_us": 34123.87722724533, 00:21:01.026 "min_latency_us": 5925.546666666667, 00:21:01.026 "max_latency_us": 56797.86666666667 00:21:01.026 } 00:21:01.026 ], 00:21:01.026 "core_count": 1 00:21:01.026 } 00:21:01.026 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3929267 00:21:01.026 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3929267 ']' 00:21:01.026 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3929267 00:21:01.026 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- 
# uname 00:21:01.026 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.027 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3929267 00:21:01.027 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:01.027 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:01.027 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3929267' 00:21:01.027 killing process with pid 3929267 00:21:01.027 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3929267 00:21:01.027 Received shutdown signal, test time was about 1.000000 seconds 00:21:01.027 00:21:01.027 Latency(us) 00:21:01.027 [2024-11-20T13:41:08.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.027 [2024-11-20T13:41:08.087Z] =================================================================================================================== 00:21:01.027 [2024-11-20T13:41:08.087Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:01.027 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3929267 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3928908 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3928908 ']' 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3928908 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3928908 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3928908' 00:21:01.286 killing process with pid 3928908 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3928908 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3928908 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3929710 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3929710 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3929710 ']' 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.286 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:01.286 [2024-11-20 14:41:08.326387] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:21:01.286 [2024-11-20 14:41:08.326443] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.545 [2024-11-20 14:41:08.410760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.546 [2024-11-20 14:41:08.445536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.546 [2024-11-20 14:41:08.445571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.546 [2024-11-20 14:41:08.445580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.546 [2024-11-20 14:41:08.445586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.546 [2024-11-20 14:41:08.445592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:01.546 [2024-11-20 14:41:08.446195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.114 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.114 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:02.114 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:02.114 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:02.114 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.114 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.114 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:02.114 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.114 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.114 [2024-11-20 14:41:09.136065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.114 malloc0 00:21:02.114 [2024-11-20 14:41:09.166284] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:02.114 [2024-11-20 14:41:09.166631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.373 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.373 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3929969 00:21:02.373 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3929969 /var/tmp/bdevperf.sock 00:21:02.373 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3929969 ']' 00:21:02.373 14:41:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.373 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.373 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.373 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.373 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.373 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:02.373 [2024-11-20 14:41:09.233051] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:21:02.373 [2024-11-20 14:41:09.233117] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929969 ] 00:21:02.373 [2024-11-20 14:41:09.302568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.373 [2024-11-20 14:41:09.339718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.373 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.373 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:02.373 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Zsr9vr0dJz 00:21:02.632 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:02.890 [2024-11-20 14:41:09.711551] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.890 nvme0n1 00:21:02.890 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:02.890 Running I/O for 1 seconds... 
00:21:04.084 3295.00 IOPS, 12.87 MiB/s 00:21:04.084 Latency(us) 00:21:04.084 [2024-11-20T13:41:11.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.084 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:04.084 Verification LBA range: start 0x0 length 0x2000 00:21:04.084 nvme0n1 : 1.06 3222.78 12.59 0.00 0.00 38648.43 6280.53 85196.80 00:21:04.084 [2024-11-20T13:41:11.144Z] =================================================================================================================== 00:21:04.084 [2024-11-20T13:41:11.144Z] Total : 3222.78 12.59 0.00 0.00 38648.43 6280.53 85196.80 00:21:04.084 { 00:21:04.084 "results": [ 00:21:04.084 { 00:21:04.084 "job": "nvme0n1", 00:21:04.084 "core_mask": "0x2", 00:21:04.084 "workload": "verify", 00:21:04.084 "status": "finished", 00:21:04.084 "verify_range": { 00:21:04.084 "start": 0, 00:21:04.084 "length": 8192 00:21:04.084 }, 00:21:04.084 "queue_depth": 128, 00:21:04.084 "io_size": 4096, 00:21:04.084 "runtime": 1.062128, 00:21:04.084 "iops": 3222.775409366856, 00:21:04.084 "mibps": 12.588966442839281, 00:21:04.084 "io_failed": 0, 00:21:04.084 "io_timeout": 0, 00:21:04.084 "avg_latency_us": 38648.42644853442, 00:21:04.084 "min_latency_us": 6280.533333333334, 00:21:04.084 "max_latency_us": 85196.8 00:21:04.084 } 00:21:04.084 ], 00:21:04.084 "core_count": 1 00:21:04.084 } 00:21:04.084 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:04.084 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.084 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.084 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.084 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:04.084 "subsystems": [ 00:21:04.084 { 00:21:04.084 "subsystem": "keyring", 
00:21:04.084 "config": [ 00:21:04.084 { 00:21:04.084 "method": "keyring_file_add_key", 00:21:04.084 "params": { 00:21:04.084 "name": "key0", 00:21:04.084 "path": "/tmp/tmp.Zsr9vr0dJz" 00:21:04.084 } 00:21:04.084 } 00:21:04.084 ] 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "subsystem": "iobuf", 00:21:04.084 "config": [ 00:21:04.084 { 00:21:04.084 "method": "iobuf_set_options", 00:21:04.084 "params": { 00:21:04.084 "small_pool_count": 8192, 00:21:04.084 "large_pool_count": 1024, 00:21:04.084 "small_bufsize": 8192, 00:21:04.084 "large_bufsize": 135168, 00:21:04.084 "enable_numa": false 00:21:04.084 } 00:21:04.084 } 00:21:04.084 ] 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "subsystem": "sock", 00:21:04.084 "config": [ 00:21:04.084 { 00:21:04.084 "method": "sock_set_default_impl", 00:21:04.084 "params": { 00:21:04.084 "impl_name": "posix" 00:21:04.084 } 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "method": "sock_impl_set_options", 00:21:04.084 "params": { 00:21:04.084 "impl_name": "ssl", 00:21:04.084 "recv_buf_size": 4096, 00:21:04.084 "send_buf_size": 4096, 00:21:04.084 "enable_recv_pipe": true, 00:21:04.084 "enable_quickack": false, 00:21:04.084 "enable_placement_id": 0, 00:21:04.084 "enable_zerocopy_send_server": true, 00:21:04.084 "enable_zerocopy_send_client": false, 00:21:04.084 "zerocopy_threshold": 0, 00:21:04.084 "tls_version": 0, 00:21:04.084 "enable_ktls": false 00:21:04.084 } 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "method": "sock_impl_set_options", 00:21:04.084 "params": { 00:21:04.084 "impl_name": "posix", 00:21:04.084 "recv_buf_size": 2097152, 00:21:04.084 "send_buf_size": 2097152, 00:21:04.084 "enable_recv_pipe": true, 00:21:04.084 "enable_quickack": false, 00:21:04.084 "enable_placement_id": 0, 00:21:04.084 "enable_zerocopy_send_server": true, 00:21:04.084 "enable_zerocopy_send_client": false, 00:21:04.084 "zerocopy_threshold": 0, 00:21:04.084 "tls_version": 0, 00:21:04.084 "enable_ktls": false 00:21:04.084 } 00:21:04.084 } 00:21:04.084 ] 
00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "subsystem": "vmd", 00:21:04.084 "config": [] 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "subsystem": "accel", 00:21:04.084 "config": [ 00:21:04.084 { 00:21:04.084 "method": "accel_set_options", 00:21:04.084 "params": { 00:21:04.084 "small_cache_size": 128, 00:21:04.084 "large_cache_size": 16, 00:21:04.084 "task_count": 2048, 00:21:04.084 "sequence_count": 2048, 00:21:04.084 "buf_count": 2048 00:21:04.084 } 00:21:04.084 } 00:21:04.084 ] 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "subsystem": "bdev", 00:21:04.084 "config": [ 00:21:04.084 { 00:21:04.084 "method": "bdev_set_options", 00:21:04.084 "params": { 00:21:04.084 "bdev_io_pool_size": 65535, 00:21:04.084 "bdev_io_cache_size": 256, 00:21:04.084 "bdev_auto_examine": true, 00:21:04.084 "iobuf_small_cache_size": 128, 00:21:04.084 "iobuf_large_cache_size": 16 00:21:04.084 } 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "method": "bdev_raid_set_options", 00:21:04.084 "params": { 00:21:04.084 "process_window_size_kb": 1024, 00:21:04.084 "process_max_bandwidth_mb_sec": 0 00:21:04.084 } 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "method": "bdev_iscsi_set_options", 00:21:04.084 "params": { 00:21:04.084 "timeout_sec": 30 00:21:04.084 } 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "method": "bdev_nvme_set_options", 00:21:04.084 "params": { 00:21:04.084 "action_on_timeout": "none", 00:21:04.084 "timeout_us": 0, 00:21:04.084 "timeout_admin_us": 0, 00:21:04.084 "keep_alive_timeout_ms": 10000, 00:21:04.084 "arbitration_burst": 0, 00:21:04.084 "low_priority_weight": 0, 00:21:04.084 "medium_priority_weight": 0, 00:21:04.084 "high_priority_weight": 0, 00:21:04.084 "nvme_adminq_poll_period_us": 10000, 00:21:04.084 "nvme_ioq_poll_period_us": 0, 00:21:04.084 "io_queue_requests": 0, 00:21:04.084 "delay_cmd_submit": true, 00:21:04.084 "transport_retry_count": 4, 00:21:04.084 "bdev_retry_count": 3, 00:21:04.084 "transport_ack_timeout": 0, 00:21:04.084 "ctrlr_loss_timeout_sec": 0, 00:21:04.084 
"reconnect_delay_sec": 0, 00:21:04.084 "fast_io_fail_timeout_sec": 0, 00:21:04.084 "disable_auto_failback": false, 00:21:04.084 "generate_uuids": false, 00:21:04.084 "transport_tos": 0, 00:21:04.084 "nvme_error_stat": false, 00:21:04.084 "rdma_srq_size": 0, 00:21:04.084 "io_path_stat": false, 00:21:04.084 "allow_accel_sequence": false, 00:21:04.084 "rdma_max_cq_size": 0, 00:21:04.084 "rdma_cm_event_timeout_ms": 0, 00:21:04.084 "dhchap_digests": [ 00:21:04.084 "sha256", 00:21:04.084 "sha384", 00:21:04.084 "sha512" 00:21:04.084 ], 00:21:04.084 "dhchap_dhgroups": [ 00:21:04.084 "null", 00:21:04.084 "ffdhe2048", 00:21:04.084 "ffdhe3072", 00:21:04.084 "ffdhe4096", 00:21:04.084 "ffdhe6144", 00:21:04.084 "ffdhe8192" 00:21:04.084 ] 00:21:04.084 } 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "method": "bdev_nvme_set_hotplug", 00:21:04.084 "params": { 00:21:04.084 "period_us": 100000, 00:21:04.084 "enable": false 00:21:04.084 } 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "method": "bdev_malloc_create", 00:21:04.084 "params": { 00:21:04.084 "name": "malloc0", 00:21:04.084 "num_blocks": 8192, 00:21:04.084 "block_size": 4096, 00:21:04.084 "physical_block_size": 4096, 00:21:04.084 "uuid": "08c7827a-8383-4f8f-9ba1-13b926042fd7", 00:21:04.084 "optimal_io_boundary": 0, 00:21:04.084 "md_size": 0, 00:21:04.084 "dif_type": 0, 00:21:04.084 "dif_is_head_of_md": false, 00:21:04.084 "dif_pi_format": 0 00:21:04.084 } 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "method": "bdev_wait_for_examine" 00:21:04.084 } 00:21:04.084 ] 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "subsystem": "nbd", 00:21:04.084 "config": [] 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "subsystem": "scheduler", 00:21:04.084 "config": [ 00:21:04.084 { 00:21:04.084 "method": "framework_set_scheduler", 00:21:04.084 "params": { 00:21:04.084 "name": "static" 00:21:04.084 } 00:21:04.084 } 00:21:04.084 ] 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "subsystem": "nvmf", 00:21:04.084 "config": [ 00:21:04.084 { 00:21:04.084 
"method": "nvmf_set_config", 00:21:04.084 "params": { 00:21:04.084 "discovery_filter": "match_any", 00:21:04.084 "admin_cmd_passthru": { 00:21:04.084 "identify_ctrlr": false 00:21:04.084 }, 00:21:04.084 "dhchap_digests": [ 00:21:04.084 "sha256", 00:21:04.084 "sha384", 00:21:04.084 "sha512" 00:21:04.084 ], 00:21:04.084 "dhchap_dhgroups": [ 00:21:04.084 "null", 00:21:04.084 "ffdhe2048", 00:21:04.084 "ffdhe3072", 00:21:04.084 "ffdhe4096", 00:21:04.084 "ffdhe6144", 00:21:04.084 "ffdhe8192" 00:21:04.084 ] 00:21:04.084 } 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "method": "nvmf_set_max_subsystems", 00:21:04.084 "params": { 00:21:04.084 "max_subsystems": 1024 00:21:04.084 } 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "method": "nvmf_set_crdt", 00:21:04.084 "params": { 00:21:04.084 "crdt1": 0, 00:21:04.084 "crdt2": 0, 00:21:04.084 "crdt3": 0 00:21:04.084 } 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "method": "nvmf_create_transport", 00:21:04.084 "params": { 00:21:04.084 "trtype": "TCP", 00:21:04.084 "max_queue_depth": 128, 00:21:04.084 "max_io_qpairs_per_ctrlr": 127, 00:21:04.084 "in_capsule_data_size": 4096, 00:21:04.084 "max_io_size": 131072, 00:21:04.084 "io_unit_size": 131072, 00:21:04.084 "max_aq_depth": 128, 00:21:04.084 "num_shared_buffers": 511, 00:21:04.084 "buf_cache_size": 4294967295, 00:21:04.084 "dif_insert_or_strip": false, 00:21:04.084 "zcopy": false, 00:21:04.084 "c2h_success": false, 00:21:04.084 "sock_priority": 0, 00:21:04.084 "abort_timeout_sec": 1, 00:21:04.084 "ack_timeout": 0, 00:21:04.084 "data_wr_pool_size": 0 00:21:04.084 } 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "method": "nvmf_create_subsystem", 00:21:04.084 "params": { 00:21:04.084 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.084 "allow_any_host": false, 00:21:04.084 "serial_number": "00000000000000000000", 00:21:04.084 "model_number": "SPDK bdev Controller", 00:21:04.084 "max_namespaces": 32, 00:21:04.084 "min_cntlid": 1, 00:21:04.084 "max_cntlid": 65519, 00:21:04.084 "ana_reporting": 
false 00:21:04.084 } 00:21:04.084 }, 00:21:04.084 { 00:21:04.084 "method": "nvmf_subsystem_add_host", 00:21:04.085 "params": { 00:21:04.085 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.085 "host": "nqn.2016-06.io.spdk:host1", 00:21:04.085 "psk": "key0" 00:21:04.085 } 00:21:04.085 }, 00:21:04.085 { 00:21:04.085 "method": "nvmf_subsystem_add_ns", 00:21:04.085 "params": { 00:21:04.085 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.085 "namespace": { 00:21:04.085 "nsid": 1, 00:21:04.085 "bdev_name": "malloc0", 00:21:04.085 "nguid": "08C7827A83834F8F9BA113B926042FD7", 00:21:04.085 "uuid": "08c7827a-8383-4f8f-9ba1-13b926042fd7", 00:21:04.085 "no_auto_visible": false 00:21:04.085 } 00:21:04.085 } 00:21:04.085 }, 00:21:04.085 { 00:21:04.085 "method": "nvmf_subsystem_add_listener", 00:21:04.085 "params": { 00:21:04.085 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.085 "listen_address": { 00:21:04.085 "trtype": "TCP", 00:21:04.085 "adrfam": "IPv4", 00:21:04.085 "traddr": "10.0.0.2", 00:21:04.085 "trsvcid": "4420" 00:21:04.085 }, 00:21:04.085 "secure_channel": false, 00:21:04.085 "sock_impl": "ssl" 00:21:04.085 } 00:21:04.085 } 00:21:04.085 ] 00:21:04.085 } 00:21:04.085 ] 00:21:04.085 }' 00:21:04.085 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:04.344 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:04.344 "subsystems": [ 00:21:04.344 { 00:21:04.344 "subsystem": "keyring", 00:21:04.344 "config": [ 00:21:04.344 { 00:21:04.344 "method": "keyring_file_add_key", 00:21:04.344 "params": { 00:21:04.344 "name": "key0", 00:21:04.344 "path": "/tmp/tmp.Zsr9vr0dJz" 00:21:04.344 } 00:21:04.344 } 00:21:04.344 ] 00:21:04.344 }, 00:21:04.344 { 00:21:04.344 "subsystem": "iobuf", 00:21:04.344 "config": [ 00:21:04.344 { 00:21:04.344 "method": "iobuf_set_options", 00:21:04.344 "params": { 00:21:04.344 "small_pool_count": 
8192, 00:21:04.344 "large_pool_count": 1024, 00:21:04.344 "small_bufsize": 8192, 00:21:04.344 "large_bufsize": 135168, 00:21:04.344 "enable_numa": false 00:21:04.344 } 00:21:04.344 } 00:21:04.344 ] 00:21:04.344 }, 00:21:04.344 { 00:21:04.344 "subsystem": "sock", 00:21:04.344 "config": [ 00:21:04.344 { 00:21:04.344 "method": "sock_set_default_impl", 00:21:04.344 "params": { 00:21:04.344 "impl_name": "posix" 00:21:04.344 } 00:21:04.344 }, 00:21:04.344 { 00:21:04.344 "method": "sock_impl_set_options", 00:21:04.344 "params": { 00:21:04.344 "impl_name": "ssl", 00:21:04.344 "recv_buf_size": 4096, 00:21:04.344 "send_buf_size": 4096, 00:21:04.344 "enable_recv_pipe": true, 00:21:04.344 "enable_quickack": false, 00:21:04.344 "enable_placement_id": 0, 00:21:04.344 "enable_zerocopy_send_server": true, 00:21:04.344 "enable_zerocopy_send_client": false, 00:21:04.344 "zerocopy_threshold": 0, 00:21:04.344 "tls_version": 0, 00:21:04.344 "enable_ktls": false 00:21:04.344 } 00:21:04.344 }, 00:21:04.344 { 00:21:04.344 "method": "sock_impl_set_options", 00:21:04.344 "params": { 00:21:04.344 "impl_name": "posix", 00:21:04.344 "recv_buf_size": 2097152, 00:21:04.344 "send_buf_size": 2097152, 00:21:04.344 "enable_recv_pipe": true, 00:21:04.344 "enable_quickack": false, 00:21:04.344 "enable_placement_id": 0, 00:21:04.344 "enable_zerocopy_send_server": true, 00:21:04.344 "enable_zerocopy_send_client": false, 00:21:04.344 "zerocopy_threshold": 0, 00:21:04.344 "tls_version": 0, 00:21:04.344 "enable_ktls": false 00:21:04.344 } 00:21:04.344 } 00:21:04.344 ] 00:21:04.344 }, 00:21:04.344 { 00:21:04.344 "subsystem": "vmd", 00:21:04.344 "config": [] 00:21:04.344 }, 00:21:04.344 { 00:21:04.344 "subsystem": "accel", 00:21:04.344 "config": [ 00:21:04.344 { 00:21:04.344 "method": "accel_set_options", 00:21:04.344 "params": { 00:21:04.344 "small_cache_size": 128, 00:21:04.344 "large_cache_size": 16, 00:21:04.344 "task_count": 2048, 00:21:04.344 "sequence_count": 2048, 00:21:04.344 "buf_count": 2048 
00:21:04.344 } 00:21:04.344 } 00:21:04.344 ] 00:21:04.344 }, 00:21:04.344 { 00:21:04.344 "subsystem": "bdev", 00:21:04.344 "config": [ 00:21:04.344 { 00:21:04.344 "method": "bdev_set_options", 00:21:04.344 "params": { 00:21:04.344 "bdev_io_pool_size": 65535, 00:21:04.344 "bdev_io_cache_size": 256, 00:21:04.344 "bdev_auto_examine": true, 00:21:04.344 "iobuf_small_cache_size": 128, 00:21:04.344 "iobuf_large_cache_size": 16 00:21:04.344 } 00:21:04.344 }, 00:21:04.344 { 00:21:04.344 "method": "bdev_raid_set_options", 00:21:04.344 "params": { 00:21:04.344 "process_window_size_kb": 1024, 00:21:04.344 "process_max_bandwidth_mb_sec": 0 00:21:04.344 } 00:21:04.344 }, 00:21:04.344 { 00:21:04.344 "method": "bdev_iscsi_set_options", 00:21:04.344 "params": { 00:21:04.344 "timeout_sec": 30 00:21:04.344 } 00:21:04.344 }, 00:21:04.344 { 00:21:04.344 "method": "bdev_nvme_set_options", 00:21:04.344 "params": { 00:21:04.344 "action_on_timeout": "none", 00:21:04.344 "timeout_us": 0, 00:21:04.344 "timeout_admin_us": 0, 00:21:04.344 "keep_alive_timeout_ms": 10000, 00:21:04.344 "arbitration_burst": 0, 00:21:04.344 "low_priority_weight": 0, 00:21:04.344 "medium_priority_weight": 0, 00:21:04.344 "high_priority_weight": 0, 00:21:04.344 "nvme_adminq_poll_period_us": 10000, 00:21:04.344 "nvme_ioq_poll_period_us": 0, 00:21:04.344 "io_queue_requests": 512, 00:21:04.344 "delay_cmd_submit": true, 00:21:04.344 "transport_retry_count": 4, 00:21:04.344 "bdev_retry_count": 3, 00:21:04.344 "transport_ack_timeout": 0, 00:21:04.344 "ctrlr_loss_timeout_sec": 0, 00:21:04.344 "reconnect_delay_sec": 0, 00:21:04.344 "fast_io_fail_timeout_sec": 0, 00:21:04.344 "disable_auto_failback": false, 00:21:04.344 "generate_uuids": false, 00:21:04.344 "transport_tos": 0, 00:21:04.344 "nvme_error_stat": false, 00:21:04.344 "rdma_srq_size": 0, 00:21:04.344 "io_path_stat": false, 00:21:04.344 "allow_accel_sequence": false, 00:21:04.344 "rdma_max_cq_size": 0, 00:21:04.344 "rdma_cm_event_timeout_ms": 0, 00:21:04.344 
"dhchap_digests": [ 00:21:04.344 "sha256", 00:21:04.344 "sha384", 00:21:04.344 "sha512" 00:21:04.344 ], 00:21:04.344 "dhchap_dhgroups": [ 00:21:04.344 "null", 00:21:04.344 "ffdhe2048", 00:21:04.344 "ffdhe3072", 00:21:04.344 "ffdhe4096", 00:21:04.344 "ffdhe6144", 00:21:04.344 "ffdhe8192" 00:21:04.344 ] 00:21:04.344 } 00:21:04.344 }, 00:21:04.345 { 00:21:04.345 "method": "bdev_nvme_attach_controller", 00:21:04.345 "params": { 00:21:04.345 "name": "nvme0", 00:21:04.345 "trtype": "TCP", 00:21:04.345 "adrfam": "IPv4", 00:21:04.345 "traddr": "10.0.0.2", 00:21:04.345 "trsvcid": "4420", 00:21:04.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.345 "prchk_reftag": false, 00:21:04.345 "prchk_guard": false, 00:21:04.345 "ctrlr_loss_timeout_sec": 0, 00:21:04.345 "reconnect_delay_sec": 0, 00:21:04.345 "fast_io_fail_timeout_sec": 0, 00:21:04.345 "psk": "key0", 00:21:04.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:04.345 "hdgst": false, 00:21:04.345 "ddgst": false, 00:21:04.345 "multipath": "multipath" 00:21:04.345 } 00:21:04.345 }, 00:21:04.345 { 00:21:04.345 "method": "bdev_nvme_set_hotplug", 00:21:04.345 "params": { 00:21:04.345 "period_us": 100000, 00:21:04.345 "enable": false 00:21:04.345 } 00:21:04.345 }, 00:21:04.345 { 00:21:04.345 "method": "bdev_enable_histogram", 00:21:04.345 "params": { 00:21:04.345 "name": "nvme0n1", 00:21:04.345 "enable": true 00:21:04.345 } 00:21:04.345 }, 00:21:04.345 { 00:21:04.345 "method": "bdev_wait_for_examine" 00:21:04.345 } 00:21:04.345 ] 00:21:04.345 }, 00:21:04.345 { 00:21:04.345 "subsystem": "nbd", 00:21:04.345 "config": [] 00:21:04.345 } 00:21:04.345 ] 00:21:04.345 }' 00:21:04.345 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3929969 00:21:04.345 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3929969 ']' 00:21:04.345 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3929969 00:21:04.345 14:41:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:04.345 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.345 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3929969 00:21:04.345 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:04.345 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:04.345 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3929969' 00:21:04.345 killing process with pid 3929969 00:21:04.345 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3929969 00:21:04.345 Received shutdown signal, test time was about 1.000000 seconds 00:21:04.345 00:21:04.345 Latency(us) 00:21:04.345 [2024-11-20T13:41:11.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.345 [2024-11-20T13:41:11.405Z] =================================================================================================================== 00:21:04.345 [2024-11-20T13:41:11.405Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:04.345 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3929969 00:21:04.345 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3929710 00:21:04.345 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3929710 ']' 00:21:04.345 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3929710 00:21:04.604 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:04.604 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.604 
14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3929710 00:21:04.604 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:04.604 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:04.604 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3929710' 00:21:04.604 killing process with pid 3929710 00:21:04.604 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3929710 00:21:04.604 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3929710 00:21:04.604 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:04.604 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:04.604 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.604 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.604 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:04.604 "subsystems": [ 00:21:04.604 { 00:21:04.604 "subsystem": "keyring", 00:21:04.604 "config": [ 00:21:04.604 { 00:21:04.604 "method": "keyring_file_add_key", 00:21:04.604 "params": { 00:21:04.604 "name": "key0", 00:21:04.604 "path": "/tmp/tmp.Zsr9vr0dJz" 00:21:04.604 } 00:21:04.604 } 00:21:04.604 ] 00:21:04.604 }, 00:21:04.604 { 00:21:04.604 "subsystem": "iobuf", 00:21:04.604 "config": [ 00:21:04.604 { 00:21:04.604 "method": "iobuf_set_options", 00:21:04.604 "params": { 00:21:04.604 "small_pool_count": 8192, 00:21:04.604 "large_pool_count": 1024, 00:21:04.604 "small_bufsize": 8192, 00:21:04.604 "large_bufsize": 135168, 00:21:04.604 "enable_numa": false 00:21:04.604 } 00:21:04.604 } 00:21:04.604 
] 00:21:04.604 }, 00:21:04.604 { 00:21:04.604 "subsystem": "sock", 00:21:04.604 "config": [ 00:21:04.604 { 00:21:04.604 "method": "sock_set_default_impl", 00:21:04.604 "params": { 00:21:04.604 "impl_name": "posix" 00:21:04.604 } 00:21:04.604 }, 00:21:04.604 { 00:21:04.604 "method": "sock_impl_set_options", 00:21:04.604 "params": { 00:21:04.604 "impl_name": "ssl", 00:21:04.604 "recv_buf_size": 4096, 00:21:04.604 "send_buf_size": 4096, 00:21:04.604 "enable_recv_pipe": true, 00:21:04.605 "enable_quickack": false, 00:21:04.605 "enable_placement_id": 0, 00:21:04.605 "enable_zerocopy_send_server": true, 00:21:04.605 "enable_zerocopy_send_client": false, 00:21:04.605 "zerocopy_threshold": 0, 00:21:04.605 "tls_version": 0, 00:21:04.605 "enable_ktls": false 00:21:04.605 } 00:21:04.605 }, 00:21:04.605 { 00:21:04.605 "method": "sock_impl_set_options", 00:21:04.605 "params": { 00:21:04.605 "impl_name": "posix", 00:21:04.605 "recv_buf_size": 2097152, 00:21:04.605 "send_buf_size": 2097152, 00:21:04.605 "enable_recv_pipe": true, 00:21:04.605 "enable_quickack": false, 00:21:04.605 "enable_placement_id": 0, 00:21:04.605 "enable_zerocopy_send_server": true, 00:21:04.605 "enable_zerocopy_send_client": false, 00:21:04.605 "zerocopy_threshold": 0, 00:21:04.605 "tls_version": 0, 00:21:04.605 "enable_ktls": false 00:21:04.605 } 00:21:04.605 } 00:21:04.605 ] 00:21:04.605 }, 00:21:04.605 { 00:21:04.605 "subsystem": "vmd", 00:21:04.605 "config": [] 00:21:04.605 }, 00:21:04.605 { 00:21:04.605 "subsystem": "accel", 00:21:04.605 "config": [ 00:21:04.605 { 00:21:04.605 "method": "accel_set_options", 00:21:04.605 "params": { 00:21:04.605 "small_cache_size": 128, 00:21:04.605 "large_cache_size": 16, 00:21:04.605 "task_count": 2048, 00:21:04.605 "sequence_count": 2048, 00:21:04.605 "buf_count": 2048 00:21:04.605 } 00:21:04.605 } 00:21:04.605 ] 00:21:04.605 }, 00:21:04.605 { 00:21:04.605 "subsystem": "bdev", 00:21:04.605 "config": [ 00:21:04.605 { 00:21:04.605 "method": "bdev_set_options", 
00:21:04.605 "params": { 00:21:04.605 "bdev_io_pool_size": 65535, 00:21:04.605 "bdev_io_cache_size": 256, 00:21:04.605 "bdev_auto_examine": true, 00:21:04.605 "iobuf_small_cache_size": 128, 00:21:04.605 "iobuf_large_cache_size": 16 00:21:04.605 } 00:21:04.605 }, 00:21:04.605 { 00:21:04.605 "method": "bdev_raid_set_options", 00:21:04.605 "params": { 00:21:04.605 "process_window_size_kb": 1024, 00:21:04.605 "process_max_bandwidth_mb_sec": 0 00:21:04.605 } 00:21:04.605 }, 00:21:04.605 { 00:21:04.605 "method": "bdev_iscsi_set_options", 00:21:04.605 "params": { 00:21:04.605 "timeout_sec": 30 00:21:04.605 } 00:21:04.605 }, 00:21:04.605 { 00:21:04.605 "method": "bdev_nvme_set_options", 00:21:04.605 "params": { 00:21:04.605 "action_on_timeout": "none", 00:21:04.605 "timeout_us": 0, 00:21:04.605 "timeout_admin_us": 0, 00:21:04.605 "keep_alive_timeout_ms": 10000, 00:21:04.605 "arbitration_burst": 0, 00:21:04.605 "low_priority_weight": 0, 00:21:04.605 "medium_priority_weight": 0, 00:21:04.605 "high_priority_weight": 0, 00:21:04.605 "nvme_adminq_poll_period_us": 10000, 00:21:04.605 "nvme_ioq_poll_period_us": 0, 00:21:04.605 "io_queue_requests": 0, 00:21:04.605 "delay_cmd_submit": true, 00:21:04.605 "transport_retry_count": 4, 00:21:04.605 "bdev_retry_count": 3, 00:21:04.605 "transport_ack_timeout": 0, 00:21:04.605 "ctrlr_loss_timeout_sec": 0, 00:21:04.605 "reconnect_delay_sec": 0, 00:21:04.605 "fast_io_fail_timeout_sec": 0, 00:21:04.605 "disable_auto_failback": false, 00:21:04.605 "generate_uuids": false, 00:21:04.605 "transport_tos": 0, 00:21:04.605 "nvme_error_stat": false, 00:21:04.605 "rdma_srq_size": 0, 00:21:04.605 "io_path_stat": false, 00:21:04.605 "allow_accel_sequence": false, 00:21:04.605 "rdma_max_cq_size": 0, 00:21:04.605 "rdma_cm_event_timeout_ms": 0, 00:21:04.605 "dhchap_digests": [ 00:21:04.605 "sha256", 00:21:04.605 "sha384", 00:21:04.605 "sha512" 00:21:04.605 ], 00:21:04.605 "dhchap_dhgroups": [ 00:21:04.605 "null", 00:21:04.605 "ffdhe2048", 00:21:04.605 
"ffdhe3072", 00:21:04.605 "ffdhe4096", 00:21:04.605 "ffdhe6144", 00:21:04.605 "ffdhe8192" 00:21:04.605 ] 00:21:04.605 } 00:21:04.605 }, 00:21:04.605 { 00:21:04.605 "method": "bdev_nvme_set_hotplug", 00:21:04.605 "params": { 00:21:04.605 "period_us": 100000, 00:21:04.605 "enable": false 00:21:04.605 } 00:21:04.605 }, 00:21:04.605 { 00:21:04.605 "method": "bdev_malloc_create", 00:21:04.605 "params": { 00:21:04.605 "name": "malloc0", 00:21:04.605 "num_blocks": 8192, 00:21:04.605 "block_size": 4096, 00:21:04.605 "physical_block_size": 4096, 00:21:04.605 "uuid": "08c7827a-8383-4f8f-9ba1-13b926042fd7", 00:21:04.605 "optimal_io_boundary": 0, 00:21:04.605 "md_size": 0, 00:21:04.605 "dif_type": 0, 00:21:04.605 "dif_is_head_of_md": false, 00:21:04.605 "dif_pi_format": 0 00:21:04.605 } 00:21:04.605 }, 00:21:04.605 { 00:21:04.605 "method": "bdev_wait_for_examine" 00:21:04.605 } 00:21:04.605 ] 00:21:04.605 }, 00:21:04.605 { 00:21:04.605 "subsystem": "nbd", 00:21:04.605 "config": [] 00:21:04.605 }, 00:21:04.605 { 00:21:04.605 "subsystem": "scheduler", 00:21:04.605 "config": [ 00:21:04.605 { 00:21:04.605 "method": "framework_set_scheduler", 00:21:04.605 "params": { 00:21:04.605 "name": "static" 00:21:04.605 } 00:21:04.605 } 00:21:04.605 ] 00:21:04.605 }, 00:21:04.605 { 00:21:04.605 "subsystem": "nvmf", 00:21:04.605 "config": [ 00:21:04.605 { 00:21:04.605 "method": "nvmf_set_config", 00:21:04.605 "params": { 00:21:04.605 "discovery_filter": "match_any", 00:21:04.605 "admin_cmd_passthru": { 00:21:04.605 "identify_ctrlr": false 00:21:04.605 }, 00:21:04.605 "dhchap_digests": [ 00:21:04.605 "sha256", 00:21:04.605 "sha384", 00:21:04.605 "sha512" 00:21:04.605 ], 00:21:04.605 "dhchap_dhgroups": [ 00:21:04.605 "null", 00:21:04.605 "ffdhe2048", 00:21:04.605 "ffdhe3072", 00:21:04.605 "ffdhe4096", 00:21:04.605 "ffdhe6144", 00:21:04.605 "ffdhe8192" 00:21:04.605 ] 00:21:04.605 } 00:21:04.605 }, 00:21:04.605 { 00:21:04.605 "method": "nvmf_set_max_subsystems", 00:21:04.605 "params": { 
00:21:04.605 "max_subsystems": 1024 00:21:04.605 } 00:21:04.605 }, 00:21:04.605 { 00:21:04.605 "method": "nvmf_set_crdt", 00:21:04.605 "params": { 00:21:04.605 "crdt1": 0, 00:21:04.605 "crdt2": 0, 00:21:04.605 "crdt3": 0 00:21:04.605 } 00:21:04.605 }, 00:21:04.605 { 00:21:04.605 "method": "nvmf_create_transport", 00:21:04.605 "params": { 00:21:04.605 "trtype": "TCP", 00:21:04.605 "max_queue_depth": 128, 00:21:04.605 "max_io_qpairs_per_ctrlr": 127, 00:21:04.605 "in_capsule_data_size": 4096, 00:21:04.605 "max_io_size": 131072, 00:21:04.605 "io_unit_size": 131072, 00:21:04.605 "max_aq_depth": 128, 00:21:04.605 "num_shared_buffers": 511, 00:21:04.605 "buf_cache_size": 4294967295, 00:21:04.605 "dif_insert_or_strip": false, 00:21:04.605 "zcopy": false, 00:21:04.605 "c2h_success": false, 00:21:04.605 "sock_priority": 0, 00:21:04.605 "abort_timeout_sec": 1, 00:21:04.605 "ack_timeout": 0, 00:21:04.605 "data_wr_pool_size": 0 00:21:04.605 } 00:21:04.605 }, 00:21:04.605 { 00:21:04.605 "method": "nvmf_create_subsystem", 00:21:04.605 "params": { 00:21:04.605 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.605 "allow_any_host": false, 00:21:04.605 "serial_number": "00000000000000000000", 00:21:04.605 "model_number": "SPDK bdev Controller", 00:21:04.605 "max_namespaces": 32, 00:21:04.605 "min_cntlid": 1, 00:21:04.606 "max_cntlid": 65519, 00:21:04.606 "ana_reporting": false 00:21:04.606 } 00:21:04.606 }, 00:21:04.606 { 00:21:04.606 "method": "nvmf_subsystem_add_host", 00:21:04.606 "params": { 00:21:04.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.606 "host": "nqn.2016-06.io.spdk:host1", 00:21:04.606 "psk": "key0" 00:21:04.606 } 00:21:04.606 }, 00:21:04.606 { 00:21:04.606 "method": "nvmf_subsystem_add_ns", 00:21:04.606 "params": { 00:21:04.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.606 "namespace": { 00:21:04.606 "nsid": 1, 00:21:04.606 "bdev_name": "malloc0", 00:21:04.606 "nguid": "08C7827A83834F8F9BA113B926042FD7", 00:21:04.606 "uuid": 
"08c7827a-8383-4f8f-9ba1-13b926042fd7", 00:21:04.606 "no_auto_visible": false 00:21:04.606 } 00:21:04.606 } 00:21:04.606 }, 00:21:04.606 { 00:21:04.606 "method": "nvmf_subsystem_add_listener", 00:21:04.606 "params": { 00:21:04.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.606 "listen_address": { 00:21:04.606 "trtype": "TCP", 00:21:04.606 "adrfam": "IPv4", 00:21:04.606 "traddr": "10.0.0.2", 00:21:04.606 "trsvcid": "4420" 00:21:04.606 }, 00:21:04.606 "secure_channel": false, 00:21:04.606 "sock_impl": "ssl" 00:21:04.606 } 00:21:04.606 } 00:21:04.606 ] 00:21:04.606 } 00:21:04.606 ] 00:21:04.606 }' 00:21:04.606 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3930589 00:21:04.606 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3930589 00:21:04.606 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3930589 ']' 00:21:04.606 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:04.606 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.606 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.606 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:04.606 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.606 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.606 [2024-11-20 14:41:11.624756] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:21:04.606 [2024-11-20 14:41:11.624815] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.865 [2024-11-20 14:41:11.708899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.865 [2024-11-20 14:41:11.747127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.865 [2024-11-20 14:41:11.747162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.865 [2024-11-20 14:41:11.747170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.865 [2024-11-20 14:41:11.747177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.865 [2024-11-20 14:41:11.747183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:04.865 [2024-11-20 14:41:11.747782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.123 [2024-11-20 14:41:11.948378] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.123 [2024-11-20 14:41:11.980385] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:05.123 [2024-11-20 14:41:11.980620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.382 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.382 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:05.382 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:05.382 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:05.382 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.382 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.382 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3930674 00:21:05.382 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3930674 /var/tmp/bdevperf.sock 00:21:05.382 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3930674 ']' 00:21:05.382 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.382 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.382 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:05.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.382 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.382 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:05.382 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.382 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:05.382 "subsystems": [ 00:21:05.382 { 00:21:05.382 "subsystem": "keyring", 00:21:05.382 "config": [ 00:21:05.382 { 00:21:05.382 "method": "keyring_file_add_key", 00:21:05.382 "params": { 00:21:05.382 "name": "key0", 00:21:05.382 "path": "/tmp/tmp.Zsr9vr0dJz" 00:21:05.382 } 00:21:05.382 } 00:21:05.382 ] 00:21:05.382 }, 00:21:05.382 { 00:21:05.382 "subsystem": "iobuf", 00:21:05.382 "config": [ 00:21:05.382 { 00:21:05.382 "method": "iobuf_set_options", 00:21:05.382 "params": { 00:21:05.382 "small_pool_count": 8192, 00:21:05.382 "large_pool_count": 1024, 00:21:05.382 "small_bufsize": 8192, 00:21:05.382 "large_bufsize": 135168, 00:21:05.382 "enable_numa": false 00:21:05.382 } 00:21:05.382 } 00:21:05.382 ] 00:21:05.382 }, 00:21:05.382 { 00:21:05.382 "subsystem": "sock", 00:21:05.382 "config": [ 00:21:05.382 { 00:21:05.382 "method": "sock_set_default_impl", 00:21:05.382 "params": { 00:21:05.382 "impl_name": "posix" 00:21:05.383 } 00:21:05.383 }, 00:21:05.383 { 00:21:05.383 "method": "sock_impl_set_options", 00:21:05.383 "params": { 00:21:05.383 "impl_name": "ssl", 00:21:05.383 "recv_buf_size": 4096, 00:21:05.383 "send_buf_size": 4096, 00:21:05.383 "enable_recv_pipe": true, 00:21:05.383 "enable_quickack": false, 00:21:05.383 "enable_placement_id": 0, 00:21:05.383 "enable_zerocopy_send_server": true, 00:21:05.383 
"enable_zerocopy_send_client": false, 00:21:05.383 "zerocopy_threshold": 0, 00:21:05.383 "tls_version": 0, 00:21:05.383 "enable_ktls": false 00:21:05.383 } 00:21:05.383 }, 00:21:05.383 { 00:21:05.383 "method": "sock_impl_set_options", 00:21:05.383 "params": { 00:21:05.383 "impl_name": "posix", 00:21:05.383 "recv_buf_size": 2097152, 00:21:05.383 "send_buf_size": 2097152, 00:21:05.383 "enable_recv_pipe": true, 00:21:05.383 "enable_quickack": false, 00:21:05.383 "enable_placement_id": 0, 00:21:05.383 "enable_zerocopy_send_server": true, 00:21:05.383 "enable_zerocopy_send_client": false, 00:21:05.383 "zerocopy_threshold": 0, 00:21:05.383 "tls_version": 0, 00:21:05.383 "enable_ktls": false 00:21:05.383 } 00:21:05.383 } 00:21:05.383 ] 00:21:05.383 }, 00:21:05.383 { 00:21:05.383 "subsystem": "vmd", 00:21:05.383 "config": [] 00:21:05.383 }, 00:21:05.383 { 00:21:05.383 "subsystem": "accel", 00:21:05.383 "config": [ 00:21:05.383 { 00:21:05.383 "method": "accel_set_options", 00:21:05.383 "params": { 00:21:05.383 "small_cache_size": 128, 00:21:05.383 "large_cache_size": 16, 00:21:05.383 "task_count": 2048, 00:21:05.383 "sequence_count": 2048, 00:21:05.383 "buf_count": 2048 00:21:05.383 } 00:21:05.383 } 00:21:05.383 ] 00:21:05.383 }, 00:21:05.383 { 00:21:05.383 "subsystem": "bdev", 00:21:05.383 "config": [ 00:21:05.383 { 00:21:05.383 "method": "bdev_set_options", 00:21:05.383 "params": { 00:21:05.383 "bdev_io_pool_size": 65535, 00:21:05.383 "bdev_io_cache_size": 256, 00:21:05.383 "bdev_auto_examine": true, 00:21:05.383 "iobuf_small_cache_size": 128, 00:21:05.383 "iobuf_large_cache_size": 16 00:21:05.383 } 00:21:05.383 }, 00:21:05.383 { 00:21:05.383 "method": "bdev_raid_set_options", 00:21:05.383 "params": { 00:21:05.383 "process_window_size_kb": 1024, 00:21:05.383 "process_max_bandwidth_mb_sec": 0 00:21:05.383 } 00:21:05.383 }, 00:21:05.383 { 00:21:05.383 "method": "bdev_iscsi_set_options", 00:21:05.383 "params": { 00:21:05.383 "timeout_sec": 30 00:21:05.383 } 00:21:05.383 }, 
00:21:05.383 { 00:21:05.383 "method": "bdev_nvme_set_options", 00:21:05.383 "params": { 00:21:05.383 "action_on_timeout": "none", 00:21:05.383 "timeout_us": 0, 00:21:05.383 "timeout_admin_us": 0, 00:21:05.383 "keep_alive_timeout_ms": 10000, 00:21:05.383 "arbitration_burst": 0, 00:21:05.383 "low_priority_weight": 0, 00:21:05.383 "medium_priority_weight": 0, 00:21:05.383 "high_priority_weight": 0, 00:21:05.383 "nvme_adminq_poll_period_us": 10000, 00:21:05.383 "nvme_ioq_poll_period_us": 0, 00:21:05.383 "io_queue_requests": 512, 00:21:05.383 "delay_cmd_submit": true, 00:21:05.383 "transport_retry_count": 4, 00:21:05.383 "bdev_retry_count": 3, 00:21:05.383 "transport_ack_timeout": 0, 00:21:05.383 "ctrlr_loss_timeout_sec": 0, 00:21:05.383 "reconnect_delay_sec": 0, 00:21:05.383 "fast_io_fail_timeout_sec": 0, 00:21:05.383 "disable_auto_failback": false, 00:21:05.383 "generate_uuids": false, 00:21:05.383 "transport_tos": 0, 00:21:05.383 "nvme_error_stat": false, 00:21:05.383 "rdma_srq_size": 0, 00:21:05.383 "io_path_stat": false, 00:21:05.383 "allow_accel_sequence": false, 00:21:05.383 "rdma_max_cq_size": 0, 00:21:05.383 "rdma_cm_event_timeout_ms": 0, 00:21:05.383 "dhchap_digests": [ 00:21:05.383 "sha256", 00:21:05.383 "sha384", 00:21:05.383 "sha512" 00:21:05.383 ], 00:21:05.383 "dhchap_dhgroups": [ 00:21:05.383 "null", 00:21:05.383 "ffdhe2048", 00:21:05.383 "ffdhe3072", 00:21:05.383 "ffdhe4096", 00:21:05.383 "ffdhe6144", 00:21:05.383 "ffdhe8192" 00:21:05.383 ] 00:21:05.383 } 00:21:05.383 }, 00:21:05.383 { 00:21:05.383 "method": "bdev_nvme_attach_controller", 00:21:05.383 "params": { 00:21:05.383 "name": "nvme0", 00:21:05.383 "trtype": "TCP", 00:21:05.383 "adrfam": "IPv4", 00:21:05.383 "traddr": "10.0.0.2", 00:21:05.383 "trsvcid": "4420", 00:21:05.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.383 "prchk_reftag": false, 00:21:05.383 "prchk_guard": false, 00:21:05.383 "ctrlr_loss_timeout_sec": 0, 00:21:05.383 "reconnect_delay_sec": 0, 00:21:05.383 
"fast_io_fail_timeout_sec": 0, 00:21:05.383 "psk": "key0", 00:21:05.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.383 "hdgst": false, 00:21:05.383 "ddgst": false, 00:21:05.383 "multipath": "multipath" 00:21:05.383 } 00:21:05.383 }, 00:21:05.383 { 00:21:05.383 "method": "bdev_nvme_set_hotplug", 00:21:05.383 "params": { 00:21:05.383 "period_us": 100000, 00:21:05.383 "enable": false 00:21:05.383 } 00:21:05.383 }, 00:21:05.383 { 00:21:05.383 "method": "bdev_enable_histogram", 00:21:05.383 "params": { 00:21:05.383 "name": "nvme0n1", 00:21:05.383 "enable": true 00:21:05.383 } 00:21:05.383 }, 00:21:05.383 { 00:21:05.383 "method": "bdev_wait_for_examine" 00:21:05.383 } 00:21:05.383 ] 00:21:05.383 }, 00:21:05.383 { 00:21:05.383 "subsystem": "nbd", 00:21:05.383 "config": [] 00:21:05.383 } 00:21:05.383 ] 00:21:05.383 }' 00:21:05.641 [2024-11-20 14:41:12.451124] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:21:05.641 [2024-11-20 14:41:12.451176] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930674 ] 00:21:05.641 [2024-11-20 14:41:12.514940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.641 [2024-11-20 14:41:12.544733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.641 [2024-11-20 14:41:12.680950] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:06.209 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.209 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:06.209 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:21:06.209 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:06.468 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.468 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:06.468 Running I/O for 1 seconds... 00:21:07.663 3812.00 IOPS, 14.89 MiB/s 00:21:07.663 Latency(us) 00:21:07.663 [2024-11-20T13:41:14.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.663 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:07.663 Verification LBA range: start 0x0 length 0x2000 00:21:07.663 nvme0n1 : 1.06 3719.17 14.53 0.00 0.00 33499.28 4587.52 81264.64 00:21:07.663 [2024-11-20T13:41:14.723Z] =================================================================================================================== 00:21:07.663 [2024-11-20T13:41:14.723Z] Total : 3719.17 14.53 0.00 0.00 33499.28 4587.52 81264.64 00:21:07.663 { 00:21:07.663 "results": [ 00:21:07.663 { 00:21:07.663 "job": "nvme0n1", 00:21:07.663 "core_mask": "0x2", 00:21:07.663 "workload": "verify", 00:21:07.663 "status": "finished", 00:21:07.663 "verify_range": { 00:21:07.663 "start": 0, 00:21:07.663 "length": 8192 00:21:07.663 }, 00:21:07.663 "queue_depth": 128, 00:21:07.663 "io_size": 4096, 00:21:07.663 "runtime": 1.059646, 00:21:07.663 "iops": 3719.166589596903, 00:21:07.664 "mibps": 14.527994490612903, 00:21:07.664 "io_failed": 0, 00:21:07.664 "io_timeout": 0, 00:21:07.664 "avg_latency_us": 33499.28306521187, 00:21:07.664 "min_latency_us": 4587.52, 00:21:07.664 "max_latency_us": 81264.64 00:21:07.664 } 00:21:07.664 ], 00:21:07.664 "core_count": 1 00:21:07.664 } 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:07.664 
14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:07.664 nvmf_trace.0 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3930674 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3930674 ']' 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3930674 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3930674 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3930674' 00:21:07.664 killing process with pid 3930674 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3930674 00:21:07.664 Received shutdown signal, test time was about 1.000000 seconds 00:21:07.664 00:21:07.664 Latency(us) 00:21:07.664 [2024-11-20T13:41:14.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.664 [2024-11-20T13:41:14.724Z] =================================================================================================================== 00:21:07.664 [2024-11-20T13:41:14.724Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:07.664 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3930674 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:07.924 rmmod nvme_tcp 00:21:07.924 rmmod nvme_fabrics 00:21:07.924 rmmod nvme_keyring 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- 
# modprobe -v -r nvme-fabrics 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3930589 ']' 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3930589 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3930589 ']' 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3930589 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3930589 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3930589' 00:21:07.924 killing process with pid 3930589 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3930589 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3930589 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:07.924 14:41:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:07.924 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:07.925 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:07.925 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:07.925 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.925 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.925 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.462 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:10.462 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.OR89yjwgAQ /tmp/tmp.SWJ3bV01aC /tmp/tmp.Zsr9vr0dJz 00:21:10.462 00:21:10.462 real 1m14.845s 00:21:10.462 user 1m59.444s 00:21:10.463 sys 0m22.535s 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.463 ************************************ 00:21:10.463 END TEST nvmf_tls 00:21:10.463 ************************************ 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:10.463 
14:41:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:10.463 ************************************ 00:21:10.463 START TEST nvmf_fips 00:21:10.463 ************************************ 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:10.463 * Looking for test storage... 00:21:10.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:10.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.463 --rc genhtml_branch_coverage=1 00:21:10.463 --rc genhtml_function_coverage=1 00:21:10.463 --rc genhtml_legend=1 00:21:10.463 --rc geninfo_all_blocks=1 00:21:10.463 --rc geninfo_unexecuted_blocks=1 00:21:10.463 00:21:10.463 ' 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:10.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.463 --rc genhtml_branch_coverage=1 00:21:10.463 --rc genhtml_function_coverage=1 00:21:10.463 --rc genhtml_legend=1 00:21:10.463 --rc geninfo_all_blocks=1 00:21:10.463 --rc geninfo_unexecuted_blocks=1 00:21:10.463 00:21:10.463 ' 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:10.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.463 --rc genhtml_branch_coverage=1 00:21:10.463 --rc genhtml_function_coverage=1 00:21:10.463 --rc genhtml_legend=1 00:21:10.463 --rc geninfo_all_blocks=1 00:21:10.463 --rc geninfo_unexecuted_blocks=1 00:21:10.463 00:21:10.463 ' 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:10.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.463 --rc genhtml_branch_coverage=1 00:21:10.463 --rc genhtml_function_coverage=1 00:21:10.463 --rc genhtml_legend=1 00:21:10.463 --rc geninfo_all_blocks=1 00:21:10.463 --rc geninfo_unexecuted_blocks=1 00:21:10.463 00:21:10.463 ' 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.463 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:10.464 Error setting digest 00:21:10.464 40823ABF8F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:10.464 40823ABF8F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:10.464 14:41:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:10.464 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:15.746 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:15.746 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:15.746 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:15.747 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:15.747 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:15.747 Found net devices under 0000:31:00.0: cvl_0_0 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:15.747 Found net devices under 0000:31:00.1: cvl_0_1 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:15.747 14:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:15.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:15.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms 00:21:15.747 00:21:15.747 --- 10.0.0.2 ping statistics --- 00:21:15.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.747 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:21:15.747 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:15.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:15.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:21:15.747 00:21:15.747 --- 10.0.0.1 ping statistics --- 00:21:15.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.747 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:15.748 14:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3935709 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3935709 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3935709 ']' 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:15.748 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:15.748 [2024-11-20 14:41:22.805207] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:21:15.748 [2024-11-20 14:41:22.805294] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.008 [2024-11-20 14:41:22.895502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.008 [2024-11-20 14:41:22.945491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.008 [2024-11-20 14:41:22.945541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.008 [2024-11-20 14:41:22.945549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.008 [2024-11-20 14:41:22.945557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.008 [2024-11-20 14:41:22.945568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:16.008 [2024-11-20 14:41:22.946360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.578 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.578 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:16.578 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:16.578 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:16.578 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:16.578 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.578 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:16.578 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:16.578 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:16.578 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.LN8 00:21:16.578 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:16.578 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.LN8 00:21:16.578 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.LN8 00:21:16.578 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.LN8 00:21:16.578 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:16.838 [2024-11-20 14:41:23.781465] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.838 [2024-11-20 14:41:23.797454] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:16.838 [2024-11-20 14:41:23.797767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.838 malloc0 00:21:16.838 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:16.838 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3935900 00:21:16.838 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3935900 /var/tmp/bdevperf.sock 00:21:16.838 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3935900 ']' 00:21:16.838 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.839 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.839 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.839 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.839 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:16.839 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:17.098 [2024-11-20 14:41:23.912161] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:21:17.098 [2024-11-20 14:41:23.912238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3935900 ] 00:21:17.098 [2024-11-20 14:41:23.996534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.098 [2024-11-20 14:41:24.047673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.667 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.667 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:17.667 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.LN8 00:21:17.927 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:18.186 [2024-11-20 14:41:24.994282] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.186 TLSTESTn1 00:21:18.186 14:41:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:18.186 Running I/O for 10 seconds... 
00:21:20.510 5094.00 IOPS, 19.90 MiB/s [2024-11-20T13:41:28.510Z] 4737.00 IOPS, 18.50 MiB/s [2024-11-20T13:41:29.448Z] 4639.00 IOPS, 18.12 MiB/s [2024-11-20T13:41:30.386Z] 4782.00 IOPS, 18.68 MiB/s [2024-11-20T13:41:31.323Z] 4774.00 IOPS, 18.65 MiB/s [2024-11-20T13:41:32.262Z] 4745.67 IOPS, 18.54 MiB/s [2024-11-20T13:41:33.199Z] 4694.00 IOPS, 18.34 MiB/s [2024-11-20T13:41:34.578Z] 4728.88 IOPS, 18.47 MiB/s [2024-11-20T13:41:35.516Z] 4699.56 IOPS, 18.36 MiB/s [2024-11-20T13:41:35.516Z] 4677.20 IOPS, 18.27 MiB/s 00:21:28.456 Latency(us) 00:21:28.456 [2024-11-20T13:41:35.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.456 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:28.456 Verification LBA range: start 0x0 length 0x2000 00:21:28.456 TLSTESTn1 : 10.05 4664.41 18.22 0.00 0.00 27357.67 5925.55 53302.61 00:21:28.456 [2024-11-20T13:41:35.516Z] =================================================================================================================== 00:21:28.456 [2024-11-20T13:41:35.516Z] Total : 4664.41 18.22 0.00 0.00 27357.67 5925.55 53302.61 00:21:28.456 { 00:21:28.456 "results": [ 00:21:28.456 { 00:21:28.456 "job": "TLSTESTn1", 00:21:28.456 "core_mask": "0x4", 00:21:28.456 "workload": "verify", 00:21:28.456 "status": "finished", 00:21:28.456 "verify_range": { 00:21:28.456 "start": 0, 00:21:28.456 "length": 8192 00:21:28.456 }, 00:21:28.456 "queue_depth": 128, 00:21:28.456 "io_size": 4096, 00:21:28.456 "runtime": 10.054642, 00:21:28.456 "iops": 4664.412716037031, 00:21:28.456 "mibps": 18.22036217201965, 00:21:28.456 "io_failed": 0, 00:21:28.456 "io_timeout": 0, 00:21:28.456 "avg_latency_us": 27357.670086497932, 00:21:28.456 "min_latency_us": 5925.546666666667, 00:21:28.456 "max_latency_us": 53302.613333333335 00:21:28.456 } 00:21:28.456 ], 00:21:28.456 "core_count": 1 00:21:28.456 } 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:28.456 
14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:28.456 nvmf_trace.0 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3935900 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3935900 ']' 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3935900 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3935900 00:21:28.456 14:41:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3935900' 00:21:28.456 killing process with pid 3935900 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3935900 00:21:28.456 Received shutdown signal, test time was about 10.000000 seconds 00:21:28.456 00:21:28.456 Latency(us) 00:21:28.456 [2024-11-20T13:41:35.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.456 [2024-11-20T13:41:35.516Z] =================================================================================================================== 00:21:28.456 [2024-11-20T13:41:35.516Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3935900 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:28.456 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:28.456 rmmod nvme_tcp 00:21:28.456 rmmod nvme_fabrics 00:21:28.456 rmmod nvme_keyring 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3935709 ']' 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3935709 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3935709 ']' 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3935709 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3935709 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3935709' 00:21:28.716 killing process with pid 3935709 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3935709 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3935709 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:28.716 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.LN8 00:21:31.252 00:21:31.252 real 0m20.670s 00:21:31.252 user 0m23.973s 00:21:31.252 sys 0m7.566s 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:31.252 ************************************ 00:21:31.252 END TEST nvmf_fips 00:21:31.252 ************************************ 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:31.252 ************************************ 00:21:31.252 START TEST nvmf_control_msg_list 00:21:31.252 ************************************ 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:31.252 * Looking for test storage... 00:21:31.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.252 14:41:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:31.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.252 --rc genhtml_branch_coverage=1 00:21:31.252 --rc genhtml_function_coverage=1 00:21:31.252 --rc genhtml_legend=1 00:21:31.252 --rc geninfo_all_blocks=1 00:21:31.252 --rc geninfo_unexecuted_blocks=1 00:21:31.252 00:21:31.252 ' 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:31.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.252 --rc genhtml_branch_coverage=1 00:21:31.252 --rc genhtml_function_coverage=1 00:21:31.252 --rc genhtml_legend=1 00:21:31.252 --rc geninfo_all_blocks=1 00:21:31.252 --rc geninfo_unexecuted_blocks=1 00:21:31.252 00:21:31.252 ' 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:31.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.252 --rc genhtml_branch_coverage=1 00:21:31.252 --rc genhtml_function_coverage=1 00:21:31.252 --rc genhtml_legend=1 00:21:31.252 --rc geninfo_all_blocks=1 00:21:31.252 --rc geninfo_unexecuted_blocks=1 00:21:31.252 00:21:31.252 ' 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:21:31.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.252 --rc genhtml_branch_coverage=1 00:21:31.252 --rc genhtml_function_coverage=1 00:21:31.252 --rc genhtml_legend=1 00:21:31.252 --rc geninfo_all_blocks=1 00:21:31.252 --rc geninfo_unexecuted_blocks=1 00:21:31.252 00:21:31.252 ' 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 
00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.252 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.253 14:41:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:31.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.253 14:41:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:31.253 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:36.524 14:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:36.524 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:36.524 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:36.524 14:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.524 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:36.525 Found net devices under 0000:31:00.0: cvl_0_0 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:36.525 14:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:36.525 Found net devices under 0000:31:00.1: cvl_0_1 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:36.525 14:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:36.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:21:36.525 00:21:36.525 --- 10.0.0.2 ping statistics --- 00:21:36.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.525 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:36.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:36.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:21:36.525 00:21:36.525 --- 10.0.0.1 ping statistics --- 00:21:36.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.525 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3942750 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3942750 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3942750 ']' 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.525 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:36.525 [2024-11-20 14:41:43.316698] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:21:36.525 [2024-11-20 14:41:43.316737] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.525 [2024-11-20 14:41:43.381163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.525 [2024-11-20 14:41:43.410172] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.525 [2024-11-20 14:41:43.410198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.526 [2024-11-20 14:41:43.410205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.526 [2024-11-20 14:41:43.410210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.526 [2024-11-20 14:41:43.410214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:36.526 [2024-11-20 14:41:43.410720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.526 [2024-11-20 14:41:43.514321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.526 Malloc0 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.526 [2024-11-20 14:41:43.548416] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3942796 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3942797 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3942798 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3942796 00:21:36.526 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:36.786 [2024-11-20 14:41:43.606793] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
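Condensed out of the xtrace noise, the control_msg_list flow logged above is: create a subsystem, back it with a small malloc bdev, add a TCP listener, then fire three concurrent one-second randread perf runs pinned to different cores. The sketch below is a hedged reconstruction from the log, not the harness script itself: the `scripts/rpc.py` path is an assumption, and it only echoes the commands so it can run without a live SPDK target.

```shell
#!/bin/sh
# Echo-only sketch of the target setup + perf launch seen in the log above.
RPC="scripts/rpc.py"                        # hypothetical path to SPDK's RPC client
NQN="nqn.2024-07.io.spdk:cnode0"            # subsystem NQN from the log
TR="trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420"

run() { echo "$@"; }                        # swap the echo for "$@" to execute

# Target setup (control_msg_list.sh lines 20-23 in the log)
run "$RPC" nvmf_create_subsystem "$NQN" -a          # -a: allow any host
run "$RPC" bdev_malloc_create -b Malloc0 32 512     # 32 MiB RAM bdev, 512 B blocks
run "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0     # expose Malloc0 as NSID 1
run "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Three concurrent 1-second 4 KiB randread runs, queue depth 1, one core each
for mask in 0x2 0x4 0x8; do
    run spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 -r "$TR" &
done
wait    # the harness stores perf_pid1..3 and waits on each before teardown
```

Running the three perf instances against the same namespace from different cores is what exercises the control-message list path; the per-core latency tables that follow in the log are their individual reports.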
00:21:36.786 [2024-11-20 14:41:43.606924] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:36.786 [2024-11-20 14:41:43.616807] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:37.723 Initializing NVMe Controllers 00:21:37.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:37.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:37.723 Initialization complete. Launching workers. 00:21:37.723 ======================================================== 00:21:37.723 Latency(us) 00:21:37.723 Device Information : IOPS MiB/s Average min max 00:21:37.723 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1614.00 6.30 619.42 274.41 816.11 00:21:37.723 ======================================================== 00:21:37.723 Total : 1614.00 6.30 619.42 274.41 816.11 00:21:37.723 00:21:37.723 Initializing NVMe Controllers 00:21:37.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:37.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:37.723 Initialization complete. Launching workers. 
00:21:37.723 ======================================================== 00:21:37.723 Latency(us) 00:21:37.723 Device Information : IOPS MiB/s Average min max 00:21:37.723 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2163.00 8.45 462.14 119.30 651.54 00:21:37.723 ======================================================== 00:21:37.723 Total : 2163.00 8.45 462.14 119.30 651.54 00:21:37.723 00:21:37.723 [2024-11-20 14:41:44.710258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c430d0 is same with the state(6) to be set 00:21:37.723 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3942797 00:21:37.981 Initializing NVMe Controllers 00:21:37.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:37.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:37.981 Initialization complete. Launching workers. 00:21:37.981 ======================================================== 00:21:37.981 Latency(us) 00:21:37.981 Device Information : IOPS MiB/s Average min max 00:21:37.981 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1633.00 6.38 612.47 273.93 752.65 00:21:37.981 ======================================================== 00:21:37.981 Total : 1633.00 6.38 612.47 273.93 752.65 00:21:37.981 00:21:37.981 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3942798 00:21:37.981 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:37.981 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:37.981 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:37.981 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- 
# sync 00:21:37.981 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:37.981 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:37.981 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:37.981 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:37.981 rmmod nvme_tcp 00:21:37.981 rmmod nvme_fabrics 00:21:37.981 rmmod nvme_keyring 00:21:37.981 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:37.981 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:37.981 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:37.982 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3942750 ']' 00:21:37.982 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3942750 00:21:37.982 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3942750 ']' 00:21:37.982 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3942750 00:21:37.982 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:37.982 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.982 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3942750 00:21:37.982 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.982 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.982 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3942750' 00:21:37.982 killing process with pid 3942750 00:21:37.982 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3942750 00:21:37.982 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3942750 00:21:37.982 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:37.982 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:37.982 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:37.982 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:37.982 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:37.982 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:37.982 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:37.982 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:37.982 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:37.982 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.982 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.982 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.517 14:41:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:40.517 00:21:40.517 real 0m9.281s 00:21:40.517 user 0m6.156s 00:21:40.517 sys 0m4.783s 00:21:40.517 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.517 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.517 ************************************ 00:21:40.517 END TEST nvmf_control_msg_list 00:21:40.517 ************************************ 00:21:40.517 14:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:40.517 14:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:40.517 14:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.517 14:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:40.518 ************************************ 00:21:40.518 START TEST nvmf_wait_for_buf 00:21:40.518 ************************************ 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:40.518 * Looking for test storage... 
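The `rmmod`/`killing process` lines in the nvmftestfini teardown above boil down to roughly the sketch below. It is echo-only because the real steps need root, and the target pid (3942750) is specific to this run; note that unloading `nvme-tcp` is what pulls in the `nvme_fabrics`/`nvme_keyring` rmmods seen in the log, via module dependencies.

```shell
#!/bin/sh
# Echo-only sketch of the nvmftestfini teardown from the log.
PID=3942750                                  # nvmf_tgt pid recorded by this run
run() { echo "$@"; }                         # swap the echo for "$@" to execute

run sync                                     # flush writes before unloading
for mod in nvme-tcp nvme-fabrics; do
    run modprobe -v -r "$mod"                # dependents (nvme_keyring) go too
done
if kill -0 "$PID" 2>/dev/null; then          # signal the target only if alive
    run kill "$PID"
fi
run ip -4 addr flush cvl_0_1                 # drop the initiator test address
```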
00:21:40.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:21:40.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.518 --rc genhtml_branch_coverage=1 00:21:40.518 --rc genhtml_function_coverage=1 00:21:40.518 --rc genhtml_legend=1 00:21:40.518 --rc geninfo_all_blocks=1 00:21:40.518 --rc geninfo_unexecuted_blocks=1 00:21:40.518 00:21:40.518 ' 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:40.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.518 --rc genhtml_branch_coverage=1 00:21:40.518 --rc genhtml_function_coverage=1 00:21:40.518 --rc genhtml_legend=1 00:21:40.518 --rc geninfo_all_blocks=1 00:21:40.518 --rc geninfo_unexecuted_blocks=1 00:21:40.518 00:21:40.518 ' 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:40.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.518 --rc genhtml_branch_coverage=1 00:21:40.518 --rc genhtml_function_coverage=1 00:21:40.518 --rc genhtml_legend=1 00:21:40.518 --rc geninfo_all_blocks=1 00:21:40.518 --rc geninfo_unexecuted_blocks=1 00:21:40.518 00:21:40.518 ' 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:40.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.518 --rc genhtml_branch_coverage=1 00:21:40.518 --rc genhtml_function_coverage=1 00:21:40.518 --rc genhtml_legend=1 00:21:40.518 --rc geninfo_all_blocks=1 00:21:40.518 --rc geninfo_unexecuted_blocks=1 00:21:40.518 00:21:40.518 ' 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:40.518 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:40.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:40.519 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:45.798 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:45.798 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:45.798 Found net devices under 0000:31:00.0: cvl_0_0 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:45.798 14:41:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:45.798 Found net devices under 0000:31:00.1: cvl_0_1 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:45.798 14:41:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.798 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.799 14:41:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:45.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:21:45.799 00:21:45.799 --- 10.0.0.2 ping statistics --- 00:21:45.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.799 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:45.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:21:45.799 00:21:45.799 --- 10.0.0.1 ping statistics --- 00:21:45.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.799 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3947443 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 3947443 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3947443 ']' 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:45.799 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:45.799 [2024-11-20 14:41:52.595820] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:21:45.799 [2024-11-20 14:41:52.595887] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.799 [2024-11-20 14:41:52.686460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.799 [2024-11-20 14:41:52.737215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.799 [2024-11-20 14:41:52.737276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:45.799 [2024-11-20 14:41:52.737286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.799 [2024-11-20 14:41:52.737293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.799 [2024-11-20 14:41:52.737299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.799 [2024-11-20 14:41:52.738090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.368 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.368 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:46.368 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:46.368 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:46.368 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.368 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.368 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:46.368 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:46.368 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:46.368 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.368 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.628 
14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.628 Malloc0 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.628 [2024-11-20 14:41:53.502514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.628 [2024-11-20 14:41:53.526694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:46.628 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:46.628 [2024-11-20 14:41:53.617317] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:48.009 Initializing NVMe Controllers 00:21:48.009 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:48.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:48.009 Initialization complete. Launching workers. 00:21:48.009 ======================================================== 00:21:48.009 Latency(us) 00:21:48.009 Device Information : IOPS MiB/s Average min max 00:21:48.009 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 166001.95 47873.55 191556.48 00:21:48.009 ======================================================== 00:21:48.009 Total : 25.00 3.12 166001.95 47873.55 191556.48 00:21:48.009 00:21:48.009 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:48.009 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.009 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:48.009 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:48.009 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.009 14:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:21:48.009 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:21:48.009 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:48.009 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:48.009 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:48.009 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:48.009 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:48.009 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:48.009 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:48.009 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:48.009 rmmod nvme_tcp 00:21:48.268 rmmod nvme_fabrics 00:21:48.268 rmmod nvme_keyring 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3947443 ']' 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3947443 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3947443 ']' 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3947443 
00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3947443 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3947443' 00:21:48.268 killing process with pid 3947443 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3947443 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3947443 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:48.268 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:48.269 14:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:48.269 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.269 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.269 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.850 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:50.850 00:21:50.850 real 0m10.228s 00:21:50.850 user 0m4.205s 00:21:50.850 sys 0m4.402s 00:21:50.850 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:50.850 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:50.850 ************************************ 00:21:50.850 END TEST nvmf_wait_for_buf 00:21:50.850 ************************************ 00:21:50.850 14:41:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:50.850 14:41:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:50.850 14:41:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:50.850 14:41:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:50.850 14:41:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:50.850 14:41:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:56.227 
14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:56.227 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:56.228 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:56.228 14:42:02 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:56.228 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:56.228 Found net devices under 0000:31:00.0: cvl_0_0 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:56.228 Found net devices under 0000:31:00.1: cvl_0_1 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:56.228 ************************************ 00:21:56.228 START TEST nvmf_perf_adq 00:21:56.228 ************************************ 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:56.228 * Looking for test storage... 00:21:56.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:56.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.228 --rc genhtml_branch_coverage=1 00:21:56.228 --rc genhtml_function_coverage=1 00:21:56.228 --rc genhtml_legend=1 00:21:56.228 --rc geninfo_all_blocks=1 00:21:56.228 --rc geninfo_unexecuted_blocks=1 00:21:56.228 00:21:56.228 ' 00:21:56.228 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:56.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.229 --rc genhtml_branch_coverage=1 00:21:56.229 --rc genhtml_function_coverage=1 00:21:56.229 --rc genhtml_legend=1 00:21:56.229 --rc geninfo_all_blocks=1 00:21:56.229 --rc geninfo_unexecuted_blocks=1 00:21:56.229 00:21:56.229 ' 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:56.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.229 --rc genhtml_branch_coverage=1 00:21:56.229 --rc genhtml_function_coverage=1 00:21:56.229 --rc genhtml_legend=1 00:21:56.229 --rc geninfo_all_blocks=1 00:21:56.229 --rc geninfo_unexecuted_blocks=1 00:21:56.229 00:21:56.229 ' 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:56.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.229 --rc genhtml_branch_coverage=1 00:21:56.229 --rc genhtml_function_coverage=1 00:21:56.229 --rc genhtml_legend=1 00:21:56.229 --rc geninfo_all_blocks=1 00:21:56.229 --rc geninfo_unexecuted_blocks=1 00:21:56.229 00:21:56.229 ' 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.229 14:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:56.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:56.229 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.570 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.570 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:01.570 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:01.570 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:01.570 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:01.571 14:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:01.571 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:01.571 
Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:01.571 Found net devices under 0000:31:00.0: cvl_0_0 00:22:01.571 14:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:01.571 Found net devices under 0000:31:00.1: cvl_0_1 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:22:01.571 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:02.511 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:04.415 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:09.692 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:09.693 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:09.693 14:42:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:09.693 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:09.693 Found net devices under 0000:31:00.0: cvl_0_0 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:09.693 Found net devices under 0000:31:00.1: cvl_0_1 00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:09.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:09.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms
00:22:09.693
00:22:09.693 --- 10.0.0.2 ping statistics ---
00:22:09.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:09.693 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:09.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:09.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms
00:22:09.693
00:22:09.693 --- 10.0.0.1 ping statistics ---
00:22:09.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:09.693 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3958918
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3958918
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3958918 ']'
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:09.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:09.693 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:09.693 [2024-11-20 14:42:16.517997] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization...
00:22:09.693 [2024-11-20 14:42:16.518045] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:09.693 [2024-11-20 14:42:16.604122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:09.693 [2024-11-20 14:42:16.646351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:09.694 [2024-11-20 14:42:16.646396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:09.694 [2024-11-20 14:42:16.646404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:09.694 [2024-11-20 14:42:16.646412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:09.694 [2024-11-20 14:42:16.646417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
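(Editor's sketch, not part of the captured trace: the `nvmfpid=.../waitforlisten` lines above follow a common start-and-wait pattern — launch the target in the background, record its pid, then poll until it is ready before issuing RPCs. In the log the real command is `ip netns exec cvl_0_0_ns_spdk nvmf_tgt ... --wait-for-rpc` and the helper retries an RPC on /var/tmp/spdk.sock; here `sleep` stands in for the target and `kill -0` for the readiness probe.)

```shell
# Hedged sketch of the waitforlisten pattern traced above.
# 'sleep' is a placeholder for nvmf_tgt --wait-for-rpc.
sleep 30 &
nvmfpid=$!                             # same role as nvmfpid=3958918 in the log
max_retries=100                        # same retry budget the trace records
for _ in $(seq 1 "$max_retries"); do
    if kill -0 "$nvmfpid" 2>/dev/null; then
        echo "pid $nvmfpid is up"      # the real helper returns 0 here
        break
    fi
    sleep 0.1                          # back off between readiness probes
done
kill "$nvmfpid" 2>/dev/null            # cleanup needed only in this sketch
```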
00:22:09.694 [2024-11-20 14:42:16.648383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:09.694 [2024-11-20 14:42:16.648577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:09.694 [2024-11-20 14:42:16.648736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:22:09.694 [2024-11-20 14:42:16.648736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:10.263 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:10.263 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0
00:22:10.263 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:10.263 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:10.263 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:10.522 [2024-11-20 14:42:17.474639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:10.522 Malloc1
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:10.522 [2024-11-20 14:42:17.537688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.522 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3958996
00:22:10.523 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2
00:22:10.523 14:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:22:13.054 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:22:13.054 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.054 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:13.054 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.054 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:22:13.054 "tick_rate": 2400000000,
00:22:13.054 "poll_groups": [
00:22:13.054 {
00:22:13.054 "name": "nvmf_tgt_poll_group_000",
00:22:13.054 "admin_qpairs": 1,
00:22:13.054 "io_qpairs": 1,
00:22:13.054 "current_admin_qpairs": 1,
00:22:13.054 "current_io_qpairs": 1,
00:22:13.054 "pending_bdev_io": 0,
00:22:13.054 "completed_nvme_io": 25992,
00:22:13.054 "transports": [
00:22:13.054 {
00:22:13.054 "trtype": "TCP"
00:22:13.054 }
00:22:13.054 ]
00:22:13.054 },
00:22:13.054 {
00:22:13.054 "name": "nvmf_tgt_poll_group_001",
00:22:13.054 "admin_qpairs": 0,
00:22:13.054 "io_qpairs": 1,
00:22:13.054 "current_admin_qpairs": 0,
00:22:13.054 "current_io_qpairs": 1,
00:22:13.054 "pending_bdev_io": 0,
00:22:13.054 "completed_nvme_io": 28299,
00:22:13.054 "transports": [
00:22:13.054 {
00:22:13.054 "trtype": "TCP"
00:22:13.054 }
00:22:13.054 ]
00:22:13.054 },
00:22:13.054 {
00:22:13.054 "name": "nvmf_tgt_poll_group_002",
00:22:13.054 "admin_qpairs": 0,
00:22:13.054 "io_qpairs": 1,
00:22:13.054 "current_admin_qpairs": 0,
00:22:13.054 "current_io_qpairs": 1,
00:22:13.054 "pending_bdev_io": 0,
00:22:13.054 "completed_nvme_io": 27241,
00:22:13.054 "transports": [
00:22:13.054 {
00:22:13.054 "trtype": "TCP"
00:22:13.054 }
00:22:13.054 ]
00:22:13.054 },
00:22:13.054 {
00:22:13.054 "name": "nvmf_tgt_poll_group_003",
00:22:13.054 "admin_qpairs": 0,
00:22:13.054 "io_qpairs": 1,
00:22:13.054 "current_admin_qpairs": 0,
00:22:13.054 "current_io_qpairs": 1,
00:22:13.054 "pending_bdev_io": 0,
00:22:13.054 "completed_nvme_io": 23005,
00:22:13.054 "transports": [
00:22:13.054 {
00:22:13.054 "trtype": "TCP"
00:22:13.054 }
00:22:13.054 ]
00:22:13.054 }
00:22:13.054 ]
00:22:13.054 }'
00:22:13.054 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:22:13.054 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:22:13.054 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:22:13.054 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:22:13.054 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3958996
00:22:21.172 Initializing NVMe Controllers
00:22:21.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:21.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:22:21.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:22:21.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:22:21.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:22:21.172 Initialization complete. Launching workers.
00:22:21.172 ========================================================
00:22:21.172 Latency(us)
00:22:21.172 Device Information : IOPS MiB/s Average min max
00:22:21.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13856.80 54.13 4618.87 1115.92 9246.91
00:22:21.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14578.60 56.95 4389.54 1150.42 9104.99
00:22:21.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14193.60 55.44 4508.90 1108.82 10653.29
00:22:21.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13664.70 53.38 4683.52 1086.95 9461.51
00:22:21.172 ========================================================
00:22:21.172 Total : 56293.70 219.90 4547.45 1086.95 10653.29
00:22:21.172
00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:21.172 rmmod nvme_tcp
00:22:21.172 rmmod nvme_fabrics
00:22:21.172 rmmod nvme_keyring
00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:22:21.172 14:42:27
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3958918 ']' 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3958918 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3958918 ']' 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3958918 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3958918 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3958918' 00:22:21.172 killing process with pid 3958918 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3958918 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3958918 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:21.172 
14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.172 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.078 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:23.078 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:23.078 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:23.078 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:24.453 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:26.357 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:31.641 14:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:31.641 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:31.641 
Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.641 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:31.641 Found net devices under 0000:31:00.0: cvl_0_0 00:22:31.642 14:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:31.642 Found net devices under 0000:31:00.1: cvl_0_1 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:31.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:22:31.642 00:22:31.642 --- 10.0.0.2 ping statistics --- 00:22:31.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.642 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:31.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:22:31.642 00:22:31.642 --- 10.0.0.1 ping statistics --- 00:22:31.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.642 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:31.642 net.core.busy_poll = 1 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
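The trace above (nvmf/common.sh@267 through @291) builds the test topology: the target-side interface cvl_0_0 is moved into a fresh network namespace, both ends get 10.0.0.x/24 addresses, and reachability is verified with a ping in each direction before the target starts. A condensed sketch of that sequence, with the interface names, namespace, and addresses taken from this log (root privileges and the two ports on this rig are assumed):

```shell
# Move the target-side interface into its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Verify reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

These are privileged configuration commands, shown only to summarize the traced setup; they are not meant to run outside a rig with these interfaces.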
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:31.642 net.core.busy_read = 1 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3964056 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3964056 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3964056 ']' 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local 
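adq_configure_driver (perf_adq.sh@22 through @38), traced above, enables ADQ on the target port: hardware tc offload is switched on, busy polling is enabled via sysctl, an mqprio qdisc splits the queues into two traffic classes, and a flower filter steers NVMe/TCP traffic (destination 10.0.0.2:4420) into traffic class 1 entirely in hardware. The same steps condensed from the trace (root and an ADQ-capable NIC assumed; this is a summary of the log, not a standalone recipe):

```shell
IF=cvl_0_0
NS="ip netns exec cvl_0_0_ns_spdk"   # target interface lives in the test namespace

$NS ethtool --offload $IF hw-tc-offload on
$NS ethtool --set-priv-flags $IF channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 -> queues 0-1 (2@0), TC1 -> queues 2-3 (2@2),
# offloaded to the NIC (hw 1, mode channel).
$NS tc qdisc add dev $IF root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev $IF ingress
# Steer NVMe/TCP traffic for port 4420 into TC1 in hardware only (skip_sw).
$NS tc filter add dev $IF protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

The final traced step, set_xps_rxqs, then aligns transmit queues with receive queues for the same flows.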
rpc_addr=/var/tmp/spdk.sock 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.642 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:31.642 [2024-11-20 14:42:38.605783] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:22:31.642 [2024-11-20 14:42:38.605831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.642 [2024-11-20 14:42:38.679340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:31.903 [2024-11-20 14:42:38.710669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.903 [2024-11-20 14:42:38.710699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.903 [2024-11-20 14:42:38.710705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.903 [2024-11-20 14:42:38.710710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:31.903 [2024-11-20 14:42:38.710714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:31.904 [2024-11-20 14:42:38.712090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.904 [2024-11-20 14:42:38.712259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.904 [2024-11-20 14:42:38.712399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.904 [2024-11-20 14:42:38.712565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.904 [2024-11-20 14:42:38.881164] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.904 14:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.904 Malloc1 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.904 [2024-11-20 14:42:38.929019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3964291 
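The rpc_cmd calls traced above (perf_adq.sh@42 through @49) configure the target once it is listening on its RPC socket: placement IDs and zero-copy send are enabled on the posix sock implementation, the framework is started, a TCP transport with sock priority 1 is created, and a 64 MiB Malloc bdev is exported through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. As a plain rpc.py sequence (the script path is relative to an SPDK checkout; adjust it to your tree):

```shell
RPC=scripts/rpc.py   # assumed location inside an SPDK checkout

$RPC sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Note the options are applied before framework_start_init, which is why the target is launched with --wait-for-rpc in the trace above.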
00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:31.904 14:42:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:34.442 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:34.442 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.442 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:34.442 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.442 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:34.442 "tick_rate": 2400000000, 00:22:34.442 "poll_groups": [ 00:22:34.442 { 00:22:34.442 "name": "nvmf_tgt_poll_group_000", 00:22:34.442 "admin_qpairs": 1, 00:22:34.442 "io_qpairs": 2, 00:22:34.442 "current_admin_qpairs": 1, 00:22:34.442 "current_io_qpairs": 2, 00:22:34.442 "pending_bdev_io": 0, 00:22:34.442 "completed_nvme_io": 36930, 00:22:34.442 "transports": [ 00:22:34.442 { 00:22:34.442 "trtype": "TCP" 00:22:34.442 } 00:22:34.442 ] 00:22:34.442 }, 00:22:34.442 { 00:22:34.442 "name": "nvmf_tgt_poll_group_001", 00:22:34.442 "admin_qpairs": 0, 00:22:34.442 "io_qpairs": 2, 00:22:34.442 "current_admin_qpairs": 0, 00:22:34.442 "current_io_qpairs": 2, 00:22:34.442 "pending_bdev_io": 0, 00:22:34.442 "completed_nvme_io": 34701, 00:22:34.442 "transports": [ 00:22:34.442 { 00:22:34.442 "trtype": "TCP" 00:22:34.442 } 00:22:34.442 ] 00:22:34.442 }, 00:22:34.442 { 00:22:34.442 "name": "nvmf_tgt_poll_group_002", 00:22:34.442 "admin_qpairs": 0, 00:22:34.442 "io_qpairs": 0, 00:22:34.442 "current_admin_qpairs": 0, 
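The initiator side (perf_adq.sh@101) runs spdk_nvme_perf against that listener. The flags from the traced command line, annotated (the binary lives under build/bin in this workspace; the core-mask reading is an interpretation of the log, where the four target reactors run on cores 0-3):

```shell
# 64 outstanding I/Os per queue pair, 4 KiB random reads, 10 s run,
# initiator pinned to cores 4-7 (-c 0xF0) so it does not share cores
# with the target's reactors (-m 0xF, cores 0-3).
spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```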
00:22:34.442 "current_io_qpairs": 0, 00:22:34.442 "pending_bdev_io": 0, 00:22:34.442 "completed_nvme_io": 0, 00:22:34.442 "transports": [ 00:22:34.442 { 00:22:34.442 "trtype": "TCP" 00:22:34.442 } 00:22:34.442 ] 00:22:34.442 }, 00:22:34.442 { 00:22:34.442 "name": "nvmf_tgt_poll_group_003", 00:22:34.442 "admin_qpairs": 0, 00:22:34.442 "io_qpairs": 0, 00:22:34.442 "current_admin_qpairs": 0, 00:22:34.442 "current_io_qpairs": 0, 00:22:34.442 "pending_bdev_io": 0, 00:22:34.442 "completed_nvme_io": 0, 00:22:34.442 "transports": [ 00:22:34.442 { 00:22:34.442 "trtype": "TCP" 00:22:34.442 } 00:22:34.442 ] 00:22:34.442 } 00:22:34.442 ] 00:22:34.442 }' 00:22:34.442 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:34.442 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:34.442 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:34.442 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:34.442 14:42:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3964291 00:22:42.582 Initializing NVMe Controllers 00:22:42.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:42.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:42.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:42.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:42.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:42.582 Initialization complete. Launching workers. 
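The check at perf_adq.sh@108 and @109 counts poll groups whose current_io_qpairs is 0 in the nvmf_get_stats output and fails if fewer than 2 of the 4 groups are idle, i.e. if ADQ did not concentrate all connections onto the poll groups serving the steered queues. A dependency-free sketch of that count against a hypothetical stats snapshot (the script itself pipes the JSON through jq and wc -l; grep -c is used here so the sketch needs nothing beyond POSIX tools):

```shell
# Hypothetical nvmf_get_stats snapshot: two busy and two idle poll groups,
# matching the shape seen in the log above.
stats='{
  "poll_groups": [
    {"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 2},
    {"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 2},
    {"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 0},
    {"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 0}
  ]
}'
# Count the idle poll groups, as the jq filter in the script does.
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 0')
echo "$count"   # prints 2
```

With count=2 the script's `[[ 2 -lt 2 ]]` guard is false, so the run is accepted and the perf process is waited on.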
00:22:42.582 ======================================================== 00:22:42.582 Latency(us) 00:22:42.582 Device Information : IOPS MiB/s Average min max 00:22:42.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13283.00 51.89 4817.97 1111.04 50859.86 00:22:42.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9518.90 37.18 6722.98 1127.88 49395.83 00:22:42.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7125.70 27.83 8989.32 1441.88 51101.63 00:22:42.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9370.90 36.61 6849.11 980.47 50214.45 00:22:42.582 ======================================================== 00:22:42.582 Total : 39298.50 153.51 6520.10 980.47 51101.63 00:22:42.582 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:42.582 rmmod nvme_tcp 00:22:42.582 rmmod nvme_fabrics 00:22:42.582 rmmod nvme_keyring 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:42.582 14:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3964056 ']' 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3964056 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3964056 ']' 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3964056 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3964056 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3964056' 00:22:42.582 killing process with pid 3964056 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3964056 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3964056 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:42.582 
14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:42.582 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:42.583 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:42.583 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.583 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.583 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:45.877 00:22:45.877 real 0m49.825s 00:22:45.877 user 2m44.745s 00:22:45.877 sys 0m9.300s 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.877 ************************************ 00:22:45.877 END TEST nvmf_perf_adq 00:22:45.877 ************************************ 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:22:45.877 ************************************ 00:22:45.877 START TEST nvmf_shutdown 00:22:45.877 ************************************ 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:45.877 * Looking for test storage... 00:22:45.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:45.877 14:42:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:45.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.877 --rc genhtml_branch_coverage=1 00:22:45.877 --rc genhtml_function_coverage=1 00:22:45.877 --rc genhtml_legend=1 00:22:45.877 --rc geninfo_all_blocks=1 00:22:45.877 --rc geninfo_unexecuted_blocks=1 00:22:45.877 00:22:45.877 ' 00:22:45.877 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:45.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.877 --rc genhtml_branch_coverage=1 00:22:45.877 --rc genhtml_function_coverage=1 00:22:45.877 --rc genhtml_legend=1 00:22:45.877 --rc geninfo_all_blocks=1 00:22:45.877 --rc geninfo_unexecuted_blocks=1 00:22:45.877 00:22:45.877 ' 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:45.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.878 --rc genhtml_branch_coverage=1 00:22:45.878 --rc genhtml_function_coverage=1 00:22:45.878 --rc genhtml_legend=1 00:22:45.878 --rc geninfo_all_blocks=1 00:22:45.878 --rc geninfo_unexecuted_blocks=1 00:22:45.878 00:22:45.878 ' 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:45.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.878 --rc genhtml_branch_coverage=1 00:22:45.878 --rc genhtml_function_coverage=1 00:22:45.878 --rc genhtml_legend=1 00:22:45.878 --rc geninfo_all_blocks=1 00:22:45.878 --rc geninfo_unexecuted_blocks=1 00:22:45.878 00:22:45.878 ' 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:45.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:45.878 ************************************ 00:22:45.878 START TEST nvmf_shutdown_tc1 00:22:45.878 ************************************ 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:45.878 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:51.158 14:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.158 14:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:51.158 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.158 14:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:51.158 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.158 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:51.159 Found net devices under 0000:31:00.0: cvl_0_0 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:51.159 Found net devices under 0000:31:00.1: cvl_0_1 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:51.159 14:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:51.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:22:51.159 00:22:51.159 --- 10.0.0.2 ping statistics --- 00:22:51.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.159 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:22:51.159 00:22:51.159 --- 10.0.0.1 ping statistics --- 00:22:51.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.159 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3971197 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3971197 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3971197 ']' 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:51.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.159 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.159 [2024-11-20 14:42:57.985134] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:22:51.159 [2024-11-20 14:42:57.985193] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.159 [2024-11-20 14:42:58.077581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.159 [2024-11-20 14:42:58.129741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.159 [2024-11-20 14:42:58.129799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.159 [2024-11-20 14:42:58.129808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.159 [2024-11-20 14:42:58.129816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.159 [2024-11-20 14:42:58.129822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:51.159 [2024-11-20 14:42:58.131911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.159 [2024-11-20 14:42:58.132072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.159 [2024-11-20 14:42:58.132228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.159 [2024-11-20 14:42:58.132229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:51.729 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.729 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:51.729 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:51.729 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:51.729 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.988 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.988 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:51.988 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.988 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.988 [2024-11-20 14:42:58.799362] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.988 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.988 14:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:51.988 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.989 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.989 Malloc1 00:22:51.989 [2024-11-20 14:42:58.889827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.989 Malloc2 00:22:51.989 Malloc3 00:22:51.989 Malloc4 00:22:51.989 Malloc5 00:22:52.249 Malloc6 00:22:52.249 Malloc7 00:22:52.249 Malloc8 00:22:52.249 Malloc9 
00:22:52.249 Malloc10 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3971577 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3971577 /var/tmp/bdevperf.sock 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3971577 ']' 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:52.249 { 00:22:52.249 "params": { 00:22:52.249 "name": "Nvme$subsystem", 00:22:52.249 "trtype": "$TEST_TRANSPORT", 00:22:52.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.249 "adrfam": "ipv4", 00:22:52.249 "trsvcid": "$NVMF_PORT", 00:22:52.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.249 "hdgst": ${hdgst:-false}, 00:22:52.249 "ddgst": ${ddgst:-false} 00:22:52.249 }, 00:22:52.249 "method": "bdev_nvme_attach_controller" 00:22:52.249 } 00:22:52.249 EOF 00:22:52.249 )") 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:52.249 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:52.249 14:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:52.249 { 00:22:52.250 "params": { 00:22:52.250 "name": "Nvme$subsystem", 00:22:52.250 "trtype": "$TEST_TRANSPORT", 00:22:52.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.250 "adrfam": "ipv4", 00:22:52.250 "trsvcid": "$NVMF_PORT", 00:22:52.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.250 "hdgst": ${hdgst:-false}, 00:22:52.250 "ddgst": ${ddgst:-false} 00:22:52.250 }, 00:22:52.250 "method": "bdev_nvme_attach_controller" 00:22:52.250 } 00:22:52.250 EOF 00:22:52.250 )") 00:22:52.250 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:52.250 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:52.250 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:52.250 { 00:22:52.250 "params": { 00:22:52.250 "name": "Nvme$subsystem", 00:22:52.250 "trtype": "$TEST_TRANSPORT", 00:22:52.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.250 "adrfam": "ipv4", 00:22:52.250 "trsvcid": "$NVMF_PORT", 00:22:52.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.250 "hdgst": ${hdgst:-false}, 00:22:52.250 "ddgst": ${ddgst:-false} 00:22:52.250 }, 00:22:52.250 "method": "bdev_nvme_attach_controller" 00:22:52.250 } 00:22:52.250 EOF 00:22:52.250 )") 00:22:52.250 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:52.250 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:52.250 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:52.250 { 
00:22:52.250 "params": { 00:22:52.250 "name": "Nvme$subsystem", 00:22:52.250 "trtype": "$TEST_TRANSPORT", 00:22:52.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.250 "adrfam": "ipv4", 00:22:52.250 "trsvcid": "$NVMF_PORT", 00:22:52.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.250 "hdgst": ${hdgst:-false}, 00:22:52.250 "ddgst": ${ddgst:-false} 00:22:52.250 }, 00:22:52.250 "method": "bdev_nvme_attach_controller" 00:22:52.250 } 00:22:52.250 EOF 00:22:52.250 )") 00:22:52.250 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:52.250 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:52.250 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:52.250 { 00:22:52.250 "params": { 00:22:52.250 "name": "Nvme$subsystem", 00:22:52.250 "trtype": "$TEST_TRANSPORT", 00:22:52.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.250 "adrfam": "ipv4", 00:22:52.250 "trsvcid": "$NVMF_PORT", 00:22:52.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.250 "hdgst": ${hdgst:-false}, 00:22:52.250 "ddgst": ${ddgst:-false} 00:22:52.250 }, 00:22:52.250 "method": "bdev_nvme_attach_controller" 00:22:52.250 } 00:22:52.250 EOF 00:22:52.250 )") 00:22:52.250 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:52.250 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:52.250 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:52.250 { 00:22:52.250 "params": { 00:22:52.250 "name": "Nvme$subsystem", 00:22:52.250 "trtype": "$TEST_TRANSPORT", 00:22:52.250 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:22:52.250 "adrfam": "ipv4", 00:22:52.250 "trsvcid": "$NVMF_PORT", 00:22:52.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.250 "hdgst": ${hdgst:-false}, 00:22:52.250 "ddgst": ${ddgst:-false} 00:22:52.250 }, 00:22:52.250 "method": "bdev_nvme_attach_controller" 00:22:52.250 } 00:22:52.250 EOF 00:22:52.250 )") 00:22:52.510 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:52.510 [2024-11-20 14:42:59.314098] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:22:52.510 [2024-11-20 14:42:59.314150] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:52.510 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:52.510 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:52.510 { 00:22:52.510 "params": { 00:22:52.510 "name": "Nvme$subsystem", 00:22:52.510 "trtype": "$TEST_TRANSPORT", 00:22:52.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.510 "adrfam": "ipv4", 00:22:52.510 "trsvcid": "$NVMF_PORT", 00:22:52.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.510 "hdgst": ${hdgst:-false}, 00:22:52.510 "ddgst": ${ddgst:-false} 00:22:52.510 }, 00:22:52.510 "method": "bdev_nvme_attach_controller" 00:22:52.510 } 00:22:52.510 EOF 00:22:52.510 )") 00:22:52.510 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:52.510 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:52.510 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:52.510 { 00:22:52.510 "params": { 00:22:52.510 "name": "Nvme$subsystem", 00:22:52.510 "trtype": "$TEST_TRANSPORT", 00:22:52.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.510 "adrfam": "ipv4", 00:22:52.510 "trsvcid": "$NVMF_PORT", 00:22:52.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.510 "hdgst": ${hdgst:-false}, 00:22:52.510 "ddgst": ${ddgst:-false} 00:22:52.510 }, 00:22:52.510 "method": "bdev_nvme_attach_controller" 00:22:52.510 } 00:22:52.510 EOF 00:22:52.510 )") 00:22:52.510 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:52.510 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:52.510 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:52.510 { 00:22:52.510 "params": { 00:22:52.510 "name": "Nvme$subsystem", 00:22:52.510 "trtype": "$TEST_TRANSPORT", 00:22:52.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.510 "adrfam": "ipv4", 00:22:52.510 "trsvcid": "$NVMF_PORT", 00:22:52.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.510 "hdgst": ${hdgst:-false}, 00:22:52.510 "ddgst": ${ddgst:-false} 00:22:52.510 }, 00:22:52.510 "method": "bdev_nvme_attach_controller" 00:22:52.510 } 00:22:52.510 EOF 00:22:52.510 )") 00:22:52.510 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:52.510 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:52.510 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:22:52.510 { 00:22:52.510 "params": { 00:22:52.510 "name": "Nvme$subsystem", 00:22:52.510 "trtype": "$TEST_TRANSPORT", 00:22:52.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.510 "adrfam": "ipv4", 00:22:52.510 "trsvcid": "$NVMF_PORT", 00:22:52.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.510 "hdgst": ${hdgst:-false}, 00:22:52.510 "ddgst": ${ddgst:-false} 00:22:52.510 }, 00:22:52.510 "method": "bdev_nvme_attach_controller" 00:22:52.510 } 00:22:52.510 EOF 00:22:52.510 )") 00:22:52.510 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:52.510 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:52.510 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:52.510 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:52.510 "params": { 00:22:52.510 "name": "Nvme1", 00:22:52.510 "trtype": "tcp", 00:22:52.510 "traddr": "10.0.0.2", 00:22:52.510 "adrfam": "ipv4", 00:22:52.510 "trsvcid": "4420", 00:22:52.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.510 "hdgst": false, 00:22:52.510 "ddgst": false 00:22:52.510 }, 00:22:52.510 "method": "bdev_nvme_attach_controller" 00:22:52.510 },{ 00:22:52.510 "params": { 00:22:52.510 "name": "Nvme2", 00:22:52.510 "trtype": "tcp", 00:22:52.510 "traddr": "10.0.0.2", 00:22:52.510 "adrfam": "ipv4", 00:22:52.510 "trsvcid": "4420", 00:22:52.510 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:52.510 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:52.510 "hdgst": false, 00:22:52.510 "ddgst": false 00:22:52.510 }, 00:22:52.510 "method": "bdev_nvme_attach_controller" 00:22:52.510 },{ 00:22:52.510 "params": { 00:22:52.510 "name": "Nvme3", 00:22:52.510 "trtype": "tcp", 00:22:52.510 "traddr": 
"10.0.0.2", 00:22:52.510 "adrfam": "ipv4", 00:22:52.510 "trsvcid": "4420", 00:22:52.510 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:52.510 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:52.510 "hdgst": false, 00:22:52.510 "ddgst": false 00:22:52.510 }, 00:22:52.510 "method": "bdev_nvme_attach_controller" 00:22:52.510 },{ 00:22:52.510 "params": { 00:22:52.510 "name": "Nvme4", 00:22:52.510 "trtype": "tcp", 00:22:52.510 "traddr": "10.0.0.2", 00:22:52.510 "adrfam": "ipv4", 00:22:52.510 "trsvcid": "4420", 00:22:52.510 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:52.510 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:52.510 "hdgst": false, 00:22:52.510 "ddgst": false 00:22:52.510 }, 00:22:52.510 "method": "bdev_nvme_attach_controller" 00:22:52.510 },{ 00:22:52.510 "params": { 00:22:52.510 "name": "Nvme5", 00:22:52.510 "trtype": "tcp", 00:22:52.510 "traddr": "10.0.0.2", 00:22:52.510 "adrfam": "ipv4", 00:22:52.510 "trsvcid": "4420", 00:22:52.510 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:52.510 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:52.510 "hdgst": false, 00:22:52.510 "ddgst": false 00:22:52.510 }, 00:22:52.510 "method": "bdev_nvme_attach_controller" 00:22:52.510 },{ 00:22:52.510 "params": { 00:22:52.510 "name": "Nvme6", 00:22:52.510 "trtype": "tcp", 00:22:52.510 "traddr": "10.0.0.2", 00:22:52.510 "adrfam": "ipv4", 00:22:52.510 "trsvcid": "4420", 00:22:52.510 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:52.510 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:52.510 "hdgst": false, 00:22:52.510 "ddgst": false 00:22:52.510 }, 00:22:52.510 "method": "bdev_nvme_attach_controller" 00:22:52.510 },{ 00:22:52.510 "params": { 00:22:52.510 "name": "Nvme7", 00:22:52.510 "trtype": "tcp", 00:22:52.510 "traddr": "10.0.0.2", 00:22:52.510 "adrfam": "ipv4", 00:22:52.510 "trsvcid": "4420", 00:22:52.510 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:52.510 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:52.510 "hdgst": false, 00:22:52.510 "ddgst": false 00:22:52.510 }, 00:22:52.510 
"method": "bdev_nvme_attach_controller" 00:22:52.511 },{ 00:22:52.511 "params": { 00:22:52.511 "name": "Nvme8", 00:22:52.511 "trtype": "tcp", 00:22:52.511 "traddr": "10.0.0.2", 00:22:52.511 "adrfam": "ipv4", 00:22:52.511 "trsvcid": "4420", 00:22:52.511 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:52.511 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:52.511 "hdgst": false, 00:22:52.511 "ddgst": false 00:22:52.511 }, 00:22:52.511 "method": "bdev_nvme_attach_controller" 00:22:52.511 },{ 00:22:52.511 "params": { 00:22:52.511 "name": "Nvme9", 00:22:52.511 "trtype": "tcp", 00:22:52.511 "traddr": "10.0.0.2", 00:22:52.511 "adrfam": "ipv4", 00:22:52.511 "trsvcid": "4420", 00:22:52.511 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:52.511 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:52.511 "hdgst": false, 00:22:52.511 "ddgst": false 00:22:52.511 }, 00:22:52.511 "method": "bdev_nvme_attach_controller" 00:22:52.511 },{ 00:22:52.511 "params": { 00:22:52.511 "name": "Nvme10", 00:22:52.511 "trtype": "tcp", 00:22:52.511 "traddr": "10.0.0.2", 00:22:52.511 "adrfam": "ipv4", 00:22:52.511 "trsvcid": "4420", 00:22:52.511 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:52.511 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:52.511 "hdgst": false, 00:22:52.511 "ddgst": false 00:22:52.511 }, 00:22:52.511 "method": "bdev_nvme_attach_controller" 00:22:52.511 }' 00:22:52.511 [2024-11-20 14:42:59.394529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.511 [2024-11-20 14:42:59.431737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.974 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.974 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:53.974 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 
00:22:53.975 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.975 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:53.975 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.975 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3971577 00:22:53.975 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:53.975 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:54.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3971577 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:54.916 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3971197 00:22:54.916 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:54.916 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:54.916 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:54.916 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:54.916 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.916 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.916 { 00:22:54.916 "params": { 00:22:54.916 "name": "Nvme$subsystem", 00:22:54.916 "trtype": "$TEST_TRANSPORT", 00:22:54.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.916 "adrfam": "ipv4", 00:22:54.916 "trsvcid": "$NVMF_PORT", 00:22:54.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.916 "hdgst": ${hdgst:-false}, 00:22:54.916 "ddgst": ${ddgst:-false} 00:22:54.916 }, 00:22:54.916 "method": "bdev_nvme_attach_controller" 00:22:54.916 } 00:22:54.916 EOF 00:22:54.916 )") 00:22:54.916 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.916 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.916 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.916 { 00:22:54.916 "params": { 00:22:54.916 "name": "Nvme$subsystem", 00:22:54.916 "trtype": "$TEST_TRANSPORT", 00:22:54.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.916 "adrfam": "ipv4", 00:22:54.916 "trsvcid": "$NVMF_PORT", 00:22:54.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.916 "hdgst": ${hdgst:-false}, 00:22:54.916 "ddgst": ${ddgst:-false} 00:22:54.916 }, 00:22:54.916 "method": "bdev_nvme_attach_controller" 00:22:54.916 } 00:22:54.916 EOF 00:22:54.916 )") 00:22:54.916 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.916 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.916 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.916 { 00:22:54.916 "params": { 00:22:54.916 "name": "Nvme$subsystem", 
00:22:54.916 "trtype": "$TEST_TRANSPORT", 00:22:54.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.916 "adrfam": "ipv4", 00:22:54.916 "trsvcid": "$NVMF_PORT", 00:22:54.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.916 "hdgst": ${hdgst:-false}, 00:22:54.916 "ddgst": ${ddgst:-false} 00:22:54.916 }, 00:22:54.916 "method": "bdev_nvme_attach_controller" 00:22:54.916 } 00:22:54.916 EOF 00:22:54.916 )") 00:22:54.916 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.917 { 00:22:54.917 "params": { 00:22:54.917 "name": "Nvme$subsystem", 00:22:54.917 "trtype": "$TEST_TRANSPORT", 00:22:54.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.917 "adrfam": "ipv4", 00:22:54.917 "trsvcid": "$NVMF_PORT", 00:22:54.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.917 "hdgst": ${hdgst:-false}, 00:22:54.917 "ddgst": ${ddgst:-false} 00:22:54.917 }, 00:22:54.917 "method": "bdev_nvme_attach_controller" 00:22:54.917 } 00:22:54.917 EOF 00:22:54.917 )") 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.917 { 00:22:54.917 "params": { 00:22:54.917 "name": "Nvme$subsystem", 00:22:54.917 "trtype": "$TEST_TRANSPORT", 00:22:54.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.917 "adrfam": "ipv4", 
00:22:54.917 "trsvcid": "$NVMF_PORT", 00:22:54.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.917 "hdgst": ${hdgst:-false}, 00:22:54.917 "ddgst": ${ddgst:-false} 00:22:54.917 }, 00:22:54.917 "method": "bdev_nvme_attach_controller" 00:22:54.917 } 00:22:54.917 EOF 00:22:54.917 )") 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.917 { 00:22:54.917 "params": { 00:22:54.917 "name": "Nvme$subsystem", 00:22:54.917 "trtype": "$TEST_TRANSPORT", 00:22:54.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.917 "adrfam": "ipv4", 00:22:54.917 "trsvcid": "$NVMF_PORT", 00:22:54.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.917 "hdgst": ${hdgst:-false}, 00:22:54.917 "ddgst": ${ddgst:-false} 00:22:54.917 }, 00:22:54.917 "method": "bdev_nvme_attach_controller" 00:22:54.917 } 00:22:54.917 EOF 00:22:54.917 )") 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.917 { 00:22:54.917 "params": { 00:22:54.917 "name": "Nvme$subsystem", 00:22:54.917 "trtype": "$TEST_TRANSPORT", 00:22:54.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.917 "adrfam": "ipv4", 00:22:54.917 "trsvcid": "$NVMF_PORT", 00:22:54.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.917 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:54.917 "hdgst": ${hdgst:-false}, 00:22:54.917 "ddgst": ${ddgst:-false} 00:22:54.917 }, 00:22:54.917 "method": "bdev_nvme_attach_controller" 00:22:54.917 } 00:22:54.917 EOF 00:22:54.917 )") 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.917 [2024-11-20 14:43:01.686822] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:22:54.917 [2024-11-20 14:43:01.686874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3971969 ] 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.917 { 00:22:54.917 "params": { 00:22:54.917 "name": "Nvme$subsystem", 00:22:54.917 "trtype": "$TEST_TRANSPORT", 00:22:54.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.917 "adrfam": "ipv4", 00:22:54.917 "trsvcid": "$NVMF_PORT", 00:22:54.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.917 "hdgst": ${hdgst:-false}, 00:22:54.917 "ddgst": ${ddgst:-false} 00:22:54.917 }, 00:22:54.917 "method": "bdev_nvme_attach_controller" 00:22:54.917 } 00:22:54.917 EOF 00:22:54.917 )") 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.917 { 00:22:54.917 
"params": { 00:22:54.917 "name": "Nvme$subsystem", 00:22:54.917 "trtype": "$TEST_TRANSPORT", 00:22:54.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.917 "adrfam": "ipv4", 00:22:54.917 "trsvcid": "$NVMF_PORT", 00:22:54.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.917 "hdgst": ${hdgst:-false}, 00:22:54.917 "ddgst": ${ddgst:-false} 00:22:54.917 }, 00:22:54.917 "method": "bdev_nvme_attach_controller" 00:22:54.917 } 00:22:54.917 EOF 00:22:54.917 )") 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.917 { 00:22:54.917 "params": { 00:22:54.917 "name": "Nvme$subsystem", 00:22:54.917 "trtype": "$TEST_TRANSPORT", 00:22:54.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.917 "adrfam": "ipv4", 00:22:54.917 "trsvcid": "$NVMF_PORT", 00:22:54.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.917 "hdgst": ${hdgst:-false}, 00:22:54.917 "ddgst": ${ddgst:-false} 00:22:54.917 }, 00:22:54.917 "method": "bdev_nvme_attach_controller" 00:22:54.917 } 00:22:54.917 EOF 00:22:54.917 )") 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:54.917 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:54.917 "params": { 00:22:54.917 "name": "Nvme1", 00:22:54.917 "trtype": "tcp", 00:22:54.917 "traddr": "10.0.0.2", 00:22:54.917 "adrfam": "ipv4", 00:22:54.917 "trsvcid": "4420", 00:22:54.917 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.917 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.917 "hdgst": false, 00:22:54.917 "ddgst": false 00:22:54.917 }, 00:22:54.917 "method": "bdev_nvme_attach_controller" 00:22:54.917 },{ 00:22:54.917 "params": { 00:22:54.917 "name": "Nvme2", 00:22:54.917 "trtype": "tcp", 00:22:54.917 "traddr": "10.0.0.2", 00:22:54.917 "adrfam": "ipv4", 00:22:54.917 "trsvcid": "4420", 00:22:54.917 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:54.917 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:54.917 "hdgst": false, 00:22:54.917 "ddgst": false 00:22:54.917 }, 00:22:54.917 "method": "bdev_nvme_attach_controller" 00:22:54.917 },{ 00:22:54.917 "params": { 00:22:54.917 "name": "Nvme3", 00:22:54.917 "trtype": "tcp", 00:22:54.917 "traddr": "10.0.0.2", 00:22:54.917 "adrfam": "ipv4", 00:22:54.917 "trsvcid": "4420", 00:22:54.917 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:54.917 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:54.917 "hdgst": false, 00:22:54.917 "ddgst": false 00:22:54.917 }, 00:22:54.917 "method": "bdev_nvme_attach_controller" 00:22:54.917 },{ 00:22:54.917 "params": { 00:22:54.917 "name": "Nvme4", 00:22:54.917 "trtype": "tcp", 00:22:54.917 "traddr": "10.0.0.2", 00:22:54.917 "adrfam": "ipv4", 00:22:54.917 "trsvcid": "4420", 00:22:54.917 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:54.917 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:54.917 "hdgst": false, 00:22:54.917 "ddgst": false 00:22:54.917 }, 00:22:54.917 "method": "bdev_nvme_attach_controller" 00:22:54.917 },{ 00:22:54.917 "params": { 
00:22:54.917 "name": "Nvme5", 00:22:54.917 "trtype": "tcp", 00:22:54.917 "traddr": "10.0.0.2", 00:22:54.917 "adrfam": "ipv4", 00:22:54.917 "trsvcid": "4420", 00:22:54.917 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:54.917 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:54.917 "hdgst": false, 00:22:54.917 "ddgst": false 00:22:54.917 }, 00:22:54.917 "method": "bdev_nvme_attach_controller" 00:22:54.917 },{ 00:22:54.917 "params": { 00:22:54.917 "name": "Nvme6", 00:22:54.917 "trtype": "tcp", 00:22:54.917 "traddr": "10.0.0.2", 00:22:54.917 "adrfam": "ipv4", 00:22:54.917 "trsvcid": "4420", 00:22:54.917 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:54.917 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:54.917 "hdgst": false, 00:22:54.917 "ddgst": false 00:22:54.917 }, 00:22:54.917 "method": "bdev_nvme_attach_controller" 00:22:54.917 },{ 00:22:54.917 "params": { 00:22:54.917 "name": "Nvme7", 00:22:54.917 "trtype": "tcp", 00:22:54.918 "traddr": "10.0.0.2", 00:22:54.918 "adrfam": "ipv4", 00:22:54.918 "trsvcid": "4420", 00:22:54.918 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:54.918 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:54.918 "hdgst": false, 00:22:54.918 "ddgst": false 00:22:54.918 }, 00:22:54.918 "method": "bdev_nvme_attach_controller" 00:22:54.918 },{ 00:22:54.918 "params": { 00:22:54.918 "name": "Nvme8", 00:22:54.918 "trtype": "tcp", 00:22:54.918 "traddr": "10.0.0.2", 00:22:54.918 "adrfam": "ipv4", 00:22:54.918 "trsvcid": "4420", 00:22:54.918 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:54.918 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:54.918 "hdgst": false, 00:22:54.918 "ddgst": false 00:22:54.918 }, 00:22:54.918 "method": "bdev_nvme_attach_controller" 00:22:54.918 },{ 00:22:54.918 "params": { 00:22:54.918 "name": "Nvme9", 00:22:54.918 "trtype": "tcp", 00:22:54.918 "traddr": "10.0.0.2", 00:22:54.918 "adrfam": "ipv4", 00:22:54.918 "trsvcid": "4420", 00:22:54.918 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:54.918 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:54.918 "hdgst": false, 00:22:54.918 "ddgst": false 00:22:54.918 }, 00:22:54.918 "method": "bdev_nvme_attach_controller" 00:22:54.918 },{ 00:22:54.918 "params": { 00:22:54.918 "name": "Nvme10", 00:22:54.918 "trtype": "tcp", 00:22:54.918 "traddr": "10.0.0.2", 00:22:54.918 "adrfam": "ipv4", 00:22:54.918 "trsvcid": "4420", 00:22:54.918 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:54.918 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:54.918 "hdgst": false, 00:22:54.918 "ddgst": false 00:22:54.918 }, 00:22:54.918 "method": "bdev_nvme_attach_controller" 00:22:54.918 }' 00:22:54.918 [2024-11-20 14:43:01.765564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.918 [2024-11-20 14:43:01.801498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.320 Running I/O for 1 seconds... 00:22:57.695 2465.00 IOPS, 154.06 MiB/s 00:22:57.695 Latency(us) 00:22:57.695 [2024-11-20T13:43:04.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.695 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.695 Verification LBA range: start 0x0 length 0x400 00:22:57.695 Nvme1n1 : 1.10 305.75 19.11 0.00 0.00 204056.32 17803.95 214084.27 00:22:57.695 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.695 Verification LBA range: start 0x0 length 0x400 00:22:57.695 Nvme2n1 : 1.11 288.84 18.05 0.00 0.00 215395.67 15400.96 212336.64 00:22:57.695 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.695 Verification LBA range: start 0x0 length 0x400 00:22:57.695 Nvme3n1 : 1.08 297.33 18.58 0.00 0.00 205181.53 9885.01 189617.49 00:22:57.695 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.695 Verification LBA range: start 0x0 length 0x400 00:22:57.695 Nvme4n1 : 1.09 294.86 18.43 0.00 0.00 203116.20 19333.12 205346.13 00:22:57.695 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:57.695 Verification LBA range: start 0x0 length 0x400 00:22:57.695 Nvme5n1 : 1.11 293.25 18.33 0.00 0.00 200767.37 5898.24 216705.71 00:22:57.695 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.695 Verification LBA range: start 0x0 length 0x400 00:22:57.695 Nvme6n1 : 1.13 282.74 17.67 0.00 0.00 204650.50 14964.05 191365.12 00:22:57.695 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.695 Verification LBA range: start 0x0 length 0x400 00:22:57.695 Nvme7n1 : 1.17 327.31 20.46 0.00 0.00 174208.14 12888.75 191365.12 00:22:57.695 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.695 Verification LBA range: start 0x0 length 0x400 00:22:57.695 Nvme8n1 : 1.17 382.87 23.93 0.00 0.00 146061.84 10431.15 185248.43 00:22:57.695 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.695 Verification LBA range: start 0x0 length 0x400 00:22:57.695 Nvme9n1 : 1.18 329.04 20.56 0.00 0.00 166977.94 7536.64 227191.47 00:22:57.695 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.695 Verification LBA range: start 0x0 length 0x400 00:22:57.695 Nvme10n1 : 1.19 323.54 20.22 0.00 0.00 166902.04 10158.08 246415.36 00:22:57.695 [2024-11-20T13:43:04.755Z] =================================================================================================================== 00:22:57.695 [2024-11-20T13:43:04.755Z] Total : 3125.51 195.34 0.00 0.00 186212.76 5898.24 246415.36 00:22:57.695 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:57.695 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:57.695 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:22:57.695 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:57.695 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:57.695 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:57.695 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:57.695 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:57.695 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:57.695 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:57.695 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:57.695 rmmod nvme_tcp 00:22:57.695 rmmod nvme_fabrics 00:22:57.695 rmmod nvme_keyring 00:22:57.695 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:57.696 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:57.696 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:57.696 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3971197 ']' 00:22:57.696 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3971197 00:22:57.696 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3971197 ']' 00:22:57.696 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 3971197 00:22:57.696 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:57.696 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.696 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3971197 00:22:57.954 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:57.954 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:57.954 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3971197' 00:22:57.954 killing process with pid 3971197 00:22:57.954 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3971197 00:22:57.954 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3971197 00:22:57.954 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:57.954 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:57.954 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:57.954 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:57.954 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:57.954 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:57.954 14:43:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:57.955 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:57.955 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:57.955 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.955 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.955 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:00.491 00:23:00.491 real 0m14.467s 00:23:00.491 user 0m32.839s 00:23:00.491 sys 0m5.080s 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.491 ************************************ 00:23:00.491 END TEST nvmf_shutdown_tc1 00:23:00.491 ************************************ 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:00.491 ************************************ 
00:23:00.491 START TEST nvmf_shutdown_tc2 00:23:00.491 ************************************ 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:00.491 14:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:00.491 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:00.492 14:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:00.492 14:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:00.492 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:00.492 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:00.492 14:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.492 14:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:00.492 Found net devices under 0000:31:00.0: cvl_0_0 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:00.492 Found net devices under 0000:31:00.1: cvl_0_1 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:00.492 14:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:00.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:00.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:23:00.492 00:23:00.492 --- 10.0.0.2 ping statistics --- 00:23:00.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.492 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:23:00.492 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:23:00.492 00:23:00.492 --- 10.0.0.1 ping statistics --- 00:23:00.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.492 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:00.493 14:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3973389 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3973389 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3973389 ']' 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:00.493 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.493 [2024-11-20 14:43:07.412209] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:23:00.493 [2024-11-20 14:43:07.412300] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.493 [2024-11-20 14:43:07.489521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:00.493 [2024-11-20 14:43:07.526804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.493 [2024-11-20 14:43:07.526841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.493 [2024-11-20 14:43:07.526847] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.493 [2024-11-20 14:43:07.526852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.493 [2024-11-20 14:43:07.526856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:00.493 [2024-11-20 14:43:07.528319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.493 [2024-11-20 14:43:07.528614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.493 [2024-11-20 14:43:07.528771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.493 [2024-11-20 14:43:07.528771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.430 [2024-11-20 14:43:08.229833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.430 14:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.430 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.430 Malloc1 00:23:01.430 [2024-11-20 14:43:08.317831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.430 Malloc2 00:23:01.430 Malloc3 00:23:01.430 Malloc4 00:23:01.430 Malloc5 00:23:01.430 Malloc6 00:23:01.691 Malloc7 00:23:01.691 Malloc8 00:23:01.691 Malloc9 
00:23:01.691 Malloc10 00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3973770 00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3973770 /var/tmp/bdevperf.sock 00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3973770 ']' 00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.691 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.691 { 00:23:01.691 "params": { 00:23:01.692 "name": "Nvme$subsystem", 00:23:01.692 "trtype": "$TEST_TRANSPORT", 00:23:01.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.692 "adrfam": "ipv4", 00:23:01.692 "trsvcid": "$NVMF_PORT", 00:23:01.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.692 "hdgst": ${hdgst:-false}, 00:23:01.692 "ddgst": ${ddgst:-false} 00:23:01.692 }, 00:23:01.692 "method": "bdev_nvme_attach_controller" 00:23:01.692 } 00:23:01.692 EOF 00:23:01.692 )") 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.692 { 00:23:01.692 "params": { 00:23:01.692 "name": "Nvme$subsystem", 00:23:01.692 "trtype": "$TEST_TRANSPORT", 00:23:01.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.692 "adrfam": "ipv4", 00:23:01.692 "trsvcid": "$NVMF_PORT", 00:23:01.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.692 "hdgst": ${hdgst:-false}, 00:23:01.692 "ddgst": ${ddgst:-false} 00:23:01.692 }, 00:23:01.692 "method": "bdev_nvme_attach_controller" 00:23:01.692 } 00:23:01.692 EOF 00:23:01.692 )") 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.692 { 00:23:01.692 "params": { 00:23:01.692 "name": "Nvme$subsystem", 00:23:01.692 "trtype": "$TEST_TRANSPORT", 00:23:01.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.692 "adrfam": "ipv4", 00:23:01.692 "trsvcid": "$NVMF_PORT", 00:23:01.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.692 "hdgst": ${hdgst:-false}, 00:23:01.692 "ddgst": ${ddgst:-false} 00:23:01.692 }, 00:23:01.692 "method": "bdev_nvme_attach_controller" 00:23:01.692 } 00:23:01.692 EOF 00:23:01.692 )") 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:23:01.692 { 00:23:01.692 "params": { 00:23:01.692 "name": "Nvme$subsystem", 00:23:01.692 "trtype": "$TEST_TRANSPORT", 00:23:01.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.692 "adrfam": "ipv4", 00:23:01.692 "trsvcid": "$NVMF_PORT", 00:23:01.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.692 "hdgst": ${hdgst:-false}, 00:23:01.692 "ddgst": ${ddgst:-false} 00:23:01.692 }, 00:23:01.692 "method": "bdev_nvme_attach_controller" 00:23:01.692 } 00:23:01.692 EOF 00:23:01.692 )") 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.692 { 00:23:01.692 "params": { 00:23:01.692 "name": "Nvme$subsystem", 00:23:01.692 "trtype": "$TEST_TRANSPORT", 00:23:01.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.692 "adrfam": "ipv4", 00:23:01.692 "trsvcid": "$NVMF_PORT", 00:23:01.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.692 "hdgst": ${hdgst:-false}, 00:23:01.692 "ddgst": ${ddgst:-false} 00:23:01.692 }, 00:23:01.692 "method": "bdev_nvme_attach_controller" 00:23:01.692 } 00:23:01.692 EOF 00:23:01.692 )") 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.692 { 00:23:01.692 "params": { 00:23:01.692 "name": "Nvme$subsystem", 00:23:01.692 "trtype": "$TEST_TRANSPORT", 
00:23:01.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.692 "adrfam": "ipv4", 00:23:01.692 "trsvcid": "$NVMF_PORT", 00:23:01.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.692 "hdgst": ${hdgst:-false}, 00:23:01.692 "ddgst": ${ddgst:-false} 00:23:01.692 }, 00:23:01.692 "method": "bdev_nvme_attach_controller" 00:23:01.692 } 00:23:01.692 EOF 00:23:01.692 )") 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.692 { 00:23:01.692 "params": { 00:23:01.692 "name": "Nvme$subsystem", 00:23:01.692 "trtype": "$TEST_TRANSPORT", 00:23:01.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.692 "adrfam": "ipv4", 00:23:01.692 "trsvcid": "$NVMF_PORT", 00:23:01.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.692 "hdgst": ${hdgst:-false}, 00:23:01.692 "ddgst": ${ddgst:-false} 00:23:01.692 }, 00:23:01.692 "method": "bdev_nvme_attach_controller" 00:23:01.692 } 00:23:01.692 EOF 00:23:01.692 )") 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.692 [2024-11-20 14:43:08.730669] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:23:01.692 [2024-11-20 14:43:08.730720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3973770 ] 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.692 { 00:23:01.692 "params": { 00:23:01.692 "name": "Nvme$subsystem", 00:23:01.692 "trtype": "$TEST_TRANSPORT", 00:23:01.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.692 "adrfam": "ipv4", 00:23:01.692 "trsvcid": "$NVMF_PORT", 00:23:01.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.692 "hdgst": ${hdgst:-false}, 00:23:01.692 "ddgst": ${ddgst:-false} 00:23:01.692 }, 00:23:01.692 "method": "bdev_nvme_attach_controller" 00:23:01.692 } 00:23:01.692 EOF 00:23:01.692 )") 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.692 { 00:23:01.692 "params": { 00:23:01.692 "name": "Nvme$subsystem", 00:23:01.692 "trtype": "$TEST_TRANSPORT", 00:23:01.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.692 "adrfam": "ipv4", 00:23:01.692 "trsvcid": "$NVMF_PORT", 00:23:01.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.692 "hdgst": ${hdgst:-false}, 00:23:01.692 "ddgst": ${ddgst:-false} 00:23:01.692 }, 00:23:01.692 "method": 
"bdev_nvme_attach_controller" 00:23:01.692 } 00:23:01.692 EOF 00:23:01.692 )") 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.692 { 00:23:01.692 "params": { 00:23:01.692 "name": "Nvme$subsystem", 00:23:01.692 "trtype": "$TEST_TRANSPORT", 00:23:01.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.692 "adrfam": "ipv4", 00:23:01.692 "trsvcid": "$NVMF_PORT", 00:23:01.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.692 "hdgst": ${hdgst:-false}, 00:23:01.692 "ddgst": ${ddgst:-false} 00:23:01.692 }, 00:23:01.692 "method": "bdev_nvme_attach_controller" 00:23:01.692 } 00:23:01.692 EOF 00:23:01.692 )") 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:01.692 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:01.692 "params": { 00:23:01.692 "name": "Nvme1", 00:23:01.692 "trtype": "tcp", 00:23:01.692 "traddr": "10.0.0.2", 00:23:01.692 "adrfam": "ipv4", 00:23:01.692 "trsvcid": "4420", 00:23:01.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.692 "hdgst": false, 00:23:01.692 "ddgst": false 00:23:01.692 }, 00:23:01.692 "method": "bdev_nvme_attach_controller" 00:23:01.692 },{ 00:23:01.692 "params": { 00:23:01.692 "name": "Nvme2", 00:23:01.693 "trtype": "tcp", 00:23:01.693 "traddr": "10.0.0.2", 00:23:01.693 "adrfam": "ipv4", 00:23:01.693 "trsvcid": "4420", 00:23:01.693 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:01.693 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:01.693 "hdgst": false, 00:23:01.693 "ddgst": false 00:23:01.693 }, 00:23:01.693 "method": "bdev_nvme_attach_controller" 00:23:01.693 },{ 00:23:01.693 "params": { 00:23:01.693 "name": "Nvme3", 00:23:01.693 "trtype": "tcp", 00:23:01.693 "traddr": "10.0.0.2", 00:23:01.693 "adrfam": "ipv4", 00:23:01.693 "trsvcid": "4420", 00:23:01.693 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:01.693 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:01.693 "hdgst": false, 00:23:01.693 "ddgst": false 00:23:01.693 }, 00:23:01.693 "method": "bdev_nvme_attach_controller" 00:23:01.693 },{ 00:23:01.693 "params": { 00:23:01.693 "name": "Nvme4", 00:23:01.693 "trtype": "tcp", 00:23:01.693 "traddr": "10.0.0.2", 00:23:01.693 "adrfam": "ipv4", 00:23:01.693 "trsvcid": "4420", 00:23:01.693 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:01.693 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:01.693 "hdgst": false, 00:23:01.693 "ddgst": false 00:23:01.693 }, 00:23:01.693 "method": "bdev_nvme_attach_controller" 00:23:01.693 },{ 00:23:01.693 "params": { 
00:23:01.693 "name": "Nvme5", 00:23:01.693 "trtype": "tcp", 00:23:01.693 "traddr": "10.0.0.2", 00:23:01.693 "adrfam": "ipv4", 00:23:01.693 "trsvcid": "4420", 00:23:01.693 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:01.693 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:01.693 "hdgst": false, 00:23:01.693 "ddgst": false 00:23:01.693 }, 00:23:01.693 "method": "bdev_nvme_attach_controller" 00:23:01.693 },{ 00:23:01.693 "params": { 00:23:01.693 "name": "Nvme6", 00:23:01.693 "trtype": "tcp", 00:23:01.693 "traddr": "10.0.0.2", 00:23:01.693 "adrfam": "ipv4", 00:23:01.693 "trsvcid": "4420", 00:23:01.693 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:01.693 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:01.693 "hdgst": false, 00:23:01.693 "ddgst": false 00:23:01.693 }, 00:23:01.693 "method": "bdev_nvme_attach_controller" 00:23:01.693 },{ 00:23:01.693 "params": { 00:23:01.693 "name": "Nvme7", 00:23:01.693 "trtype": "tcp", 00:23:01.693 "traddr": "10.0.0.2", 00:23:01.693 "adrfam": "ipv4", 00:23:01.693 "trsvcid": "4420", 00:23:01.693 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:01.693 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:01.693 "hdgst": false, 00:23:01.693 "ddgst": false 00:23:01.693 }, 00:23:01.693 "method": "bdev_nvme_attach_controller" 00:23:01.693 },{ 00:23:01.693 "params": { 00:23:01.693 "name": "Nvme8", 00:23:01.693 "trtype": "tcp", 00:23:01.693 "traddr": "10.0.0.2", 00:23:01.693 "adrfam": "ipv4", 00:23:01.693 "trsvcid": "4420", 00:23:01.693 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:01.693 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:01.693 "hdgst": false, 00:23:01.693 "ddgst": false 00:23:01.693 }, 00:23:01.693 "method": "bdev_nvme_attach_controller" 00:23:01.693 },{ 00:23:01.693 "params": { 00:23:01.693 "name": "Nvme9", 00:23:01.693 "trtype": "tcp", 00:23:01.693 "traddr": "10.0.0.2", 00:23:01.693 "adrfam": "ipv4", 00:23:01.693 "trsvcid": "4420", 00:23:01.693 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:01.693 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:01.693 "hdgst": false, 00:23:01.693 "ddgst": false 00:23:01.693 }, 00:23:01.693 "method": "bdev_nvme_attach_controller" 00:23:01.693 },{ 00:23:01.693 "params": { 00:23:01.693 "name": "Nvme10", 00:23:01.693 "trtype": "tcp", 00:23:01.693 "traddr": "10.0.0.2", 00:23:01.693 "adrfam": "ipv4", 00:23:01.693 "trsvcid": "4420", 00:23:01.693 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:01.693 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:01.693 "hdgst": false, 00:23:01.693 "ddgst": false 00:23:01.693 }, 00:23:01.693 "method": "bdev_nvme_attach_controller" 00:23:01.693 }' 00:23:01.953 [2024-11-20 14:43:08.796714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.953 [2024-11-20 14:43:08.827199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.332 Running I/O for 10 seconds... 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:03.591 14:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@111 -- # killprocess 3973770 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3973770 ']' 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3973770 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3973770 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3973770' 00:23:03.591 killing process with pid 3973770 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3973770 00:23:03.591 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3973770 00:23:03.850 Received shutdown signal, test time was about 0.604469 seconds 00:23:03.850 00:23:03.850 Latency(us) 00:23:03.850 [2024-11-20T13:43:10.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.850 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.850 Verification LBA range: start 0x0 length 0x400 00:23:03.850 Nvme1n1 : 0.58 332.99 20.81 0.00 0.00 189286.97 14636.37 159034.03 00:23:03.850 Job: Nvme2n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:23:03.850 Verification LBA range: start 0x0 length 0x400 00:23:03.850 Nvme2n1 : 0.60 322.42 20.15 0.00 0.00 191394.99 21299.20 175636.48 00:23:03.850 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.851 Verification LBA range: start 0x0 length 0x400 00:23:03.851 Nvme3n1 : 0.57 335.55 20.97 0.00 0.00 179021.37 14308.69 170393.60 00:23:03.851 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.851 Verification LBA range: start 0x0 length 0x400 00:23:03.851 Nvme4n1 : 0.60 428.99 26.81 0.00 0.00 136598.08 12069.55 176510.29 00:23:03.851 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.851 Verification LBA range: start 0x0 length 0x400 00:23:03.851 Nvme5n1 : 0.60 318.04 19.88 0.00 0.00 179611.88 15619.41 178257.92 00:23:03.851 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.851 Verification LBA range: start 0x0 length 0x400 00:23:03.851 Nvme6n1 : 0.59 327.99 20.50 0.00 0.00 170170.03 13707.95 169519.79 00:23:03.851 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.851 Verification LBA range: start 0x0 length 0x400 00:23:03.851 Nvme7n1 : 0.59 324.05 20.25 0.00 0.00 168112.92 15728.64 177384.11 00:23:03.851 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.851 Verification LBA range: start 0x0 length 0x400 00:23:03.851 Nvme8n1 : 0.59 326.75 20.42 0.00 0.00 162052.55 26105.17 160781.65 00:23:03.851 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.851 Verification LBA range: start 0x0 length 0x400 00:23:03.851 Nvme9n1 : 0.60 318.84 19.93 0.00 0.00 162190.79 16384.00 177384.11 00:23:03.851 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.851 Verification LBA range: start 0x0 length 0x400 00:23:03.851 Nvme10n1 : 0.60 319.58 19.97 0.00 0.00 157387.38 14417.92 
196608.00 00:23:03.851 [2024-11-20T13:43:10.911Z] =================================================================================================================== 00:23:03.851 [2024-11-20T13:43:10.911Z] Total : 3355.21 209.70 0.00 0.00 168518.68 12069.55 196608.00 00:23:03.851 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:23:04.787 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3973389 00:23:04.787 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:23:04.787 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:04.787 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:04.787 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:04.787 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:04.787 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:04.787 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:04.787 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:04.787 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:04.787 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:04.787 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:04.787 rmmod nvme_tcp 00:23:05.046 rmmod nvme_fabrics 00:23:05.046 rmmod nvme_keyring 00:23:05.046 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:05.046 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:05.046 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:05.046 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3973389 ']' 00:23:05.046 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3973389 00:23:05.046 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3973389 ']' 00:23:05.046 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3973389 00:23:05.046 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:05.046 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.046 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3973389 00:23:05.046 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:05.046 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:05.046 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3973389' 00:23:05.046 killing process with pid 3973389 00:23:05.046 14:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3973389 00:23:05.046 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3973389 00:23:05.306 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:05.306 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:05.306 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:05.306 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:05.306 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:05.306 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:05.306 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:05.306 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:05.306 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:05.306 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.306 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.306 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:07.214 00:23:07.214 real 
0m7.111s 00:23:07.214 user 0m20.639s 00:23:07.214 sys 0m0.963s 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:07.214 ************************************ 00:23:07.214 END TEST nvmf_shutdown_tc2 00:23:07.214 ************************************ 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:07.214 ************************************ 00:23:07.214 START TEST nvmf_shutdown_tc3 00:23:07.214 ************************************ 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:07.214 
14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.214 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:07.215 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:07.215 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:07.215 14:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:07.215 Found net devices under 0000:31:00.0: cvl_0_0 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.215 
14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:07.215 Found net devices under 0000:31:00.1: cvl_0_1 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:07.215 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.474 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.474 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.474 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.474 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:07.474 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.474 14:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.474 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.474 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:07.474 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:07.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:23:07.474 00:23:07.474 --- 10.0.0.2 ping statistics --- 00:23:07.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.474 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:23:07.474 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:07.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:23:07.475 00:23:07.475 --- 10.0.0.1 ping statistics --- 00:23:07.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.475 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.475 
14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3975217 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3975217 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3975217 ']' 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.475 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:07.735 [2024-11-20 14:43:14.564740] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:23:07.735 [2024-11-20 14:43:14.564790] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.735 [2024-11-20 14:43:14.635138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.735 [2024-11-20 14:43:14.664506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.735 [2024-11-20 14:43:14.664534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.735 [2024-11-20 14:43:14.664540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.735 [2024-11-20 14:43:14.664546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.735 [2024-11-20 14:43:14.664550] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:07.735 [2024-11-20 14:43:14.666042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.735 [2024-11-20 14:43:14.666161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.735 [2024-11-20 14:43:14.666310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:07.735 [2024-11-20 14:43:14.666468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.306 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.306 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:08.306 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:08.306 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:08.306 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.306 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.306 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:08.306 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.306 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.565 [2024-11-20 14:43:15.368157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.565 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.565 14:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:08.565 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:08.565 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.566 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.566 Malloc1 00:23:08.566 [2024-11-20 14:43:15.453865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.566 Malloc2 00:23:08.566 Malloc3 00:23:08.566 Malloc4 00:23:08.566 Malloc5 00:23:08.566 Malloc6 00:23:08.827 Malloc7 00:23:08.827 Malloc8 00:23:08.827 Malloc9 
00:23:08.827 Malloc10 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3975599 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3975599 /var/tmp/bdevperf.sock 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3975599 ']' 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:08.827 { 00:23:08.827 "params": { 00:23:08.827 "name": "Nvme$subsystem", 00:23:08.827 "trtype": "$TEST_TRANSPORT", 00:23:08.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:08.827 "adrfam": "ipv4", 00:23:08.827 "trsvcid": "$NVMF_PORT", 00:23:08.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:08.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:08.827 "hdgst": ${hdgst:-false}, 00:23:08.827 "ddgst": ${ddgst:-false} 00:23:08.827 }, 00:23:08.827 "method": "bdev_nvme_attach_controller" 00:23:08.827 } 00:23:08.827 EOF 00:23:08.827 )") 00:23:08.827 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:08.828 [2024-11-20 14:43:15.870174] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:23:08.828 [2024-11-20 14:43:15.870226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3975599 ] 00:23:09.088 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
00:23:09.088 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:09.088 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:09.088 "params": { 00:23:09.088 "name": "Nvme1", 00:23:09.088 "trtype": "tcp", 00:23:09.088 "traddr": "10.0.0.2", 00:23:09.088 "adrfam": "ipv4", 00:23:09.088 "trsvcid": "4420", 00:23:09.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.088 "hdgst": false, 00:23:09.088 "ddgst": false 00:23:09.088 }, 00:23:09.088 "method": "bdev_nvme_attach_controller" 00:23:09.088 },{ 00:23:09.088 "params": { 00:23:09.088 "name": "Nvme2", 00:23:09.088 "trtype": "tcp", 00:23:09.088 "traddr": "10.0.0.2", 00:23:09.088 "adrfam": "ipv4", 00:23:09.088 "trsvcid": "4420", 00:23:09.088 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:09.088 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:09.088 "hdgst": false, 00:23:09.088 "ddgst": false 00:23:09.088 }, 00:23:09.088 "method": "bdev_nvme_attach_controller" 00:23:09.088 },{ 00:23:09.088 "params": { 00:23:09.088 "name": "Nvme3", 00:23:09.088 "trtype": "tcp", 00:23:09.088 "traddr": "10.0.0.2", 00:23:09.088 "adrfam": "ipv4", 00:23:09.088 "trsvcid": "4420", 00:23:09.088 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:09.088 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:09.088 "hdgst": false, 00:23:09.088 "ddgst": false 00:23:09.088 }, 00:23:09.088 "method": "bdev_nvme_attach_controller" 00:23:09.088 },{ 00:23:09.088 "params": { 00:23:09.088 "name": "Nvme4", 00:23:09.088 "trtype": "tcp", 00:23:09.088 "traddr": "10.0.0.2", 00:23:09.088 "adrfam": "ipv4", 00:23:09.088 "trsvcid": "4420", 00:23:09.088 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:09.088 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:09.088 "hdgst": false, 00:23:09.088 "ddgst": false 00:23:09.088 }, 00:23:09.088 "method": "bdev_nvme_attach_controller" 00:23:09.088 },{ 00:23:09.088 "params": { 
00:23:09.088 "name": "Nvme5", 00:23:09.088 "trtype": "tcp", 00:23:09.088 "traddr": "10.0.0.2", 00:23:09.088 "adrfam": "ipv4", 00:23:09.088 "trsvcid": "4420", 00:23:09.088 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:09.088 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:09.088 "hdgst": false, 00:23:09.088 "ddgst": false 00:23:09.088 }, 00:23:09.088 "method": "bdev_nvme_attach_controller" 00:23:09.088 },{ 00:23:09.088 "params": { 00:23:09.088 "name": "Nvme6", 00:23:09.088 "trtype": "tcp", 00:23:09.088 "traddr": "10.0.0.2", 00:23:09.088 "adrfam": "ipv4", 00:23:09.088 "trsvcid": "4420", 00:23:09.088 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:09.088 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:09.088 "hdgst": false, 00:23:09.088 "ddgst": false 00:23:09.088 }, 00:23:09.088 "method": "bdev_nvme_attach_controller" 00:23:09.088 },{ 00:23:09.088 "params": { 00:23:09.088 "name": "Nvme7", 00:23:09.088 "trtype": "tcp", 00:23:09.088 "traddr": "10.0.0.2", 00:23:09.088 "adrfam": "ipv4", 00:23:09.088 "trsvcid": "4420", 00:23:09.088 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:09.088 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:09.088 "hdgst": false, 00:23:09.088 "ddgst": false 00:23:09.088 }, 00:23:09.088 "method": "bdev_nvme_attach_controller" 00:23:09.088 },{ 00:23:09.088 "params": { 00:23:09.088 "name": "Nvme8", 00:23:09.088 "trtype": "tcp", 00:23:09.088 "traddr": "10.0.0.2", 00:23:09.088 "adrfam": "ipv4", 00:23:09.088 "trsvcid": "4420", 00:23:09.088 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:09.088 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:09.088 "hdgst": false, 00:23:09.088 "ddgst": false 00:23:09.088 }, 00:23:09.088 "method": "bdev_nvme_attach_controller" 00:23:09.088 },{ 00:23:09.088 "params": { 00:23:09.088 "name": "Nvme9", 00:23:09.088 "trtype": "tcp", 00:23:09.088 "traddr": "10.0.0.2", 00:23:09.088 "adrfam": "ipv4", 00:23:09.088 "trsvcid": "4420", 00:23:09.088 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:09.088 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:09.088 "hdgst": false, 00:23:09.088 "ddgst": false 00:23:09.088 }, 00:23:09.088 "method": "bdev_nvme_attach_controller" 00:23:09.088 },{ 00:23:09.088 "params": { 00:23:09.088 "name": "Nvme10", 00:23:09.088 "trtype": "tcp", 00:23:09.088 "traddr": "10.0.0.2", 00:23:09.088 "adrfam": "ipv4", 00:23:09.088 "trsvcid": "4420", 00:23:09.088 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:09.088 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:09.088 "hdgst": false, 00:23:09.088 "ddgst": false 00:23:09.088 }, 00:23:09.088 "method": "bdev_nvme_attach_controller" 00:23:09.088 }' 00:23:09.088 [2024-11-20 14:43:15.935627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.088 [2024-11-20 14:43:15.966138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.468 Running I/O for 10 seconds... 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:10.726 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:10.985 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:23:10.985 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:10.985 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:10.985 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:10.985 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.985 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:10.985 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.985 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:10.985 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:10.985 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:10.985 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:10.985 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:10.985 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3975217 00:23:10.985 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3975217 ']' 00:23:10.985 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3975217 00:23:10.985 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:10.985 14:43:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.260 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3975217 00:23:11.260 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:11.260 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:11.260 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3975217' 00:23:11.260 killing process with pid 3975217 00:23:11.260 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3975217 00:23:11.260 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3975217 00:23:11.260 [2024-11-20 14:43:18.056800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c30380 is same with the state(6) to be set 00:23:11.261 [2024-11-20 14:43:18.058101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63540 is same with the state(6) to be set
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63540 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.058367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63540 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.058371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63540 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.058375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63540 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.058380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63540 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.058384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63540 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.058389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63540 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.058394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63540 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.058399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63540 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.058404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63540 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.058409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63540 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.058414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63540 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.058418] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63540 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059593] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059666] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059726] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059790] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.262 [2024-11-20 14:43:18.059819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.059824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.059829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.059833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.059839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.059843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.059848] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.059854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.059859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.059864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63a10 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060667] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060727] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060787] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060845] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060902] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.060933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63f00 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.061498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.061521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.061528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.061533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.061537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.061543] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.061549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.061553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.061559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.061564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.061569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set 00:23:11.263 [2024-11-20 14:43:18.061573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set 00:23:11.264 [2024-11-20 14:43:18.061578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set 00:23:11.264 [2024-11-20 14:43:18.061583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set 00:23:11.264 [2024-11-20 14:43:18.061588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set 00:23:11.264 [2024-11-20 14:43:18.061592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set 00:23:11.264 [2024-11-20 14:43:18.061598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set 00:23:11.264 [2024-11-20 14:43:18.061603] 
00:23:11.264 [2024-11-20 14:43:18.061608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d643d0 is same with the state(6) to be set
00:23:11.264 [... message repeated for tqpair=0x1d643d0 through 14:43:18.061830 ...]
00:23:11.264 [2024-11-20 14:43:18.062466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d64750 is same with the state(6) to be set
00:23:11.264 [... message repeated for tqpair=0x1d64750 through 14:43:18.062780 ...]
00:23:11.265 [2024-11-20 14:43:18.063894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d64fa0 is same with the state(6) to be set
00:23:11.265 [... message repeated for tqpair=0x1d64fa0 through 14:43:18.064213 ...]
00:23:11.266 [2024-11-20 14:43:18.064655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65470 is same with the state(6) to be set
00:23:11.266 [... message repeated for tqpair=0x1d65470 through 14:43:18.064970 ...]
00:23:11.267 [2024-11-20 14:43:18.064959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:11.267 [2024-11-20 14:43:18.064975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65470 is same with the state(6) to be set
00:23:11.267 [2024-11-20 14:43:18.064988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.267 [2024-11-20 14:43:18.064997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:11.267 [2024-11-20 14:43:18.065002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.267 [2024-11-20 14:43:18.065009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 14:43:18.065015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5f40 is same with the state(6) to be set 00:23:11.267 [2024-11-20 14:43:18.065057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8a10 is same with the state(6) to be set 00:23:11.267 [2024-11-20 14:43:18.065125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8b20 is same with the state(6) to be set 00:23:11.267 [2024-11-20 14:43:18.065192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 
14:43:18.065199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8940 is same with the state(6) to be set 00:23:11.267 [2024-11-20 14:43:18.065263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a1610 is same with the state(6) to be set 00:23:11.267 [2024-11-20 14:43:18.065333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065368] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c97f0 is same with the state(6) to be set 00:23:11.267 [2024-11-20 14:43:18.065394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1575b00 is 
same with the state(6) to be set 00:23:11.267 [2024-11-20 14:43:18.065455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15823f0 is same with the state(6) to be set 00:23:11.267 [2024-11-20 14:43:18.065527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065540] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.267 [2024-11-20 14:43:18.065556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-11-20 14:43:18.065562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.268 [2024-11-20 14:43:18.065567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19af960 is same with the state(6) to be set 00:23:11.268 [2024-11-20 14:43:18.065588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.268 [2024-11-20 14:43:18.065595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.268 [2024-11-20 14:43:18.065606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:11.268 [2024-11-20 14:43:18.065617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.268 [2024-11-20 14:43:18.065629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158c0b0 is same with the state(6) to be set 00:23:11.268 [2024-11-20 14:43:18.065693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 
14:43:18.065744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.065990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.065998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.066003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.066010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.066017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 
[2024-11-20 14:43:18.066024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.066029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.066036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.066041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.066048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.066053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.066060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.066065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.066072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.066077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.066085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.066090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.066097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.066103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.268 [2024-11-20 14:43:18.066109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.268 [2024-11-20 14:43:18.066114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 
14:43:18.066301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:11.269 [2024-11-20 14:43:18.066737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.269 [2024-11-20 14:43:18.066763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.269 [2024-11-20 14:43:18.066771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.066994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.066999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.067006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.067012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 
14:43:18.067020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.067025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.067032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.067038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.067046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.067051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.067058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.067063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.067070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.067075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.067082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.067087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.067093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.071526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.071574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.071587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.071602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.071613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.071625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.071636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.071648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.071659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.071671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.071681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.071694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.071709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.071722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.071732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.071744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.071754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.071766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.071776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.071788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.071798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.071811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.071821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.071833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.270 [2024-11-20 14:43:18.071842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.270 [2024-11-20 14:43:18.071854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.071864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.071877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.071887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.071899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.071908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.071921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 
14:43:18.071931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.071943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.071953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.071965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.071974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.071988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.071998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.072010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.072020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.072032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.072043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.072055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.072065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.072077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.072087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.072100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.072110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.072122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.072132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.072145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.072155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.072167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.072177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.072189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.072199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.072211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.072221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.072233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.072261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.072276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.072288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.072300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.072310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.072322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.271 [2024-11-20 14:43:18.072332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.271 [2024-11-20 14:43:18.074425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:11.271 [2024-11-20 14:43:18.074459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1575b00 (9): Bad file descriptor 00:23:11.271 [2024-11-20 14:43:18.074749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:11.271 [2024-11-20 14:43:18.074768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a1610 (9): Bad file descriptor 00:23:11.271 [2024-11-20 14:43:18.075370] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.271 [2024-11-20 14:43:18.075434] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.271 [2024-11-20 14:43:18.075809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.271 [2024-11-20 14:43:18.075823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1575b00 with addr=10.0.0.2, port=4420 00:23:11.271 [2024-11-20 14:43:18.075831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1575b00 is same with the state(6) to be set 00:23:11.271 [2024-11-20 14:43:18.075849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b5f40 (9): Bad file descriptor 00:23:11.271 [2024-11-20 14:43:18.075866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c8a10 (9): Bad file descriptor 00:23:11.271 [2024-11-20 14:43:18.075881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f8b20 (9): Bad file descriptor 00:23:11.271 
[2024-11-20 14:43:18.075896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f8940 (9): Bad file descriptor 00:23:11.271 [2024-11-20 14:43:18.075910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c97f0 (9): Bad file descriptor 00:23:11.271 [2024-11-20 14:43:18.075924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15823f0 (9): Bad file descriptor 00:23:11.271 [2024-11-20 14:43:18.075936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19af960 (9): Bad file descriptor 00:23:11.271 [2024-11-20 14:43:18.075948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x158c0b0 (9): Bad file descriptor 00:23:11.271 [2024-11-20 14:43:18.075997] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.271 [2024-11-20 14:43:18.076035] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.271 [2024-11-20 14:43:18.076068] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.271 [2024-11-20 14:43:18.076103] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.271 [2024-11-20 14:43:18.076136] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.271 [2024-11-20 14:43:18.076188] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.271 [2024-11-20 14:43:18.076546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.271 [2024-11-20 14:43:18.076579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a1610 with addr=10.0.0.2, port=4420 00:23:11.271 [2024-11-20 14:43:18.076589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a1610 is same with the state(6) to be set 00:23:11.271 [2024-11-20 14:43:18.076607] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1575b00 (9): Bad file descriptor 00:23:11.271 [2024-11-20 14:43:18.076687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a1610 (9): Bad file descriptor 00:23:11.271 [2024-11-20 14:43:18.076698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:11.271 [2024-11-20 14:43:18.076704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:11.271 [2024-11-20 14:43:18.076712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:11.271 [2024-11-20 14:43:18.076719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:11.271 [2024-11-20 14:43:18.076759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:11.271 [2024-11-20 14:43:18.076766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:11.271 [2024-11-20 14:43:18.076772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:11.271 [2024-11-20 14:43:18.076777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:23:11.271 [2024-11-20 14:43:18.084868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:11.271 [2024-11-20 14:43:18.085429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.271 [2024-11-20 14:43:18.085463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1575b00 with addr=10.0.0.2, port=4420 00:23:11.271 [2024-11-20 14:43:18.085473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1575b00 is same with the state(6) to be set 00:23:11.271 [2024-11-20 14:43:18.085515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1575b00 (9): Bad file descriptor 00:23:11.271 [2024-11-20 14:43:18.085612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:11.271 [2024-11-20 14:43:18.085620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:11.271 [2024-11-20 14:43:18.085625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:11.272 [2024-11-20 14:43:18.085631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:23:11.272 [2024-11-20 14:43:18.085667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085751] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 
14:43:18.085976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.085990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.085997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.086002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.086009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.086015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.086022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.086028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.086035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.086040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.086047] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.086052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.086062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.086068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.086074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.086079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.086086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.086092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.086099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.086105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.086112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.086118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.086125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.086130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.086138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.086144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.086152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.086157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.086164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.086169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.272 [2024-11-20 14:43:18.086176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.272 [2024-11-20 14:43:18.086182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 
[2024-11-20 14:43:18.086194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086494] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.086512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.086519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1790d90 is same with the state(6) to be set 00:23:11.273 [2024-11-20 14:43:18.087412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.087423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.087432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.087438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.087445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.087455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.087462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.087468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.087475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.087481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.087489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.087495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.087502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.087507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.087514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.087519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.087527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.087533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:11.273 [2024-11-20 14:43:18.087539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.087545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.087552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.087559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.087567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.087573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.087580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.087585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.273 [2024-11-20 14:43:18.087592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.273 [2024-11-20 14:43:18.087598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.274 [2024-11-20 14:43:18.087606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.274 [2024-11-20 14:43:18.087611] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.274 [2024-11-20 14:43:18.087619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.274 [2024-11-20 14:43:18.087625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.274 [2024-11-20 14:43:18.087632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.274 [2024-11-20 14:43:18.087639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.274 [2024-11-20 14:43:18.087646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.274 [2024-11-20 14:43:18.087651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.274 [2024-11-20 14:43:18.087658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.274 [2024-11-20 14:43:18.087664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.274 [2024-11-20 14:43:18.087671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.274 [2024-11-20 14:43:18.087676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.274 [2024-11-20 14:43:18.087684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
00:23:11.274 [2024-11-20 14:43:18.087690 - 14:43:18.088241] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated READ command/completion pairs, sqid:1 cid:20-63 nsid:1 lba:18944-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.275 [2024-11-20 14:43:18.088299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197f8f0 is same with the state(6) to be set
00:23:11.275 [2024-11-20 14:43:18.089194 - 14:43:18.090036] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated READ command/completion pairs, sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.277 [2024-11-20 14:43:18.090043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198c800 is same with the state(6) to be set
00:23:11.277 [2024-11-20 14:43:18.090927 - ] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated READ command/completion pairs, sqid:1 starting at cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (sequence continues)
lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.090994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:11.277 [2024-11-20 14:43:18.091066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091139] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 
14:43:18.091359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.277 [2024-11-20 14:43:18.091436] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.277 [2024-11-20 14:43:18.091442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.091449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.091454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.091462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.091467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.091474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.091481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.091488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.091493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.091500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.091506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.091513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.091518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.091526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.091532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.091539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.091546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.091553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.091558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.091565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.091571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.091580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 
[2024-11-20 14:43:18.091585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.091592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.091598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.091605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.091611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.091618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.091624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.091631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.091636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.091643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.091649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.091656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.095460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.095506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.095519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.095533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.095544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.095558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.095567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.095580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.095590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.095602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.095612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.095625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.095641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.095654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.095664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.095677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.095687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.095698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198dac0 is same with the state(6) to be set 00:23:11.278 [2024-11-20 14:43:18.097429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.097441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.097451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.097457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:11.278 [2024-11-20 14:43:18.097464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.097470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.097477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.097483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.097490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.097495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.097502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.097507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.097515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.097520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.097528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.097534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.278 [2024-11-20 14:43:18.097541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.278 [2024-11-20 14:43:18.097546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.279 [2024-11-20 14:43:18.097553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.279 [2024-11-20 14:43:18.097561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.279 [2024-11-20 14:43:18.097568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.279 [2024-11-20 14:43:18.097573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.279 [2024-11-20 14:43:18.097580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.279 [2024-11-20 14:43:18.097586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.279 [2024-11-20 14:43:18.097593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.279 [2024-11-20 14:43:18.097599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.279 [2024-11-20 14:43:18.097606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.279 [2024-11-20 14:43:18.097612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) notice pairs repeated for qid:1 cid:14 through cid:63 (lba 18176 through 24448, len:128) ...]
00:23:11.279 [2024-11-20 14:43:18.098275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ed80 is same with the state(6) to be set
00:23:11.280 [2024-11-20 14:43:18.099171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.280 [2024-11-20 14:43:18.099180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) notice pairs repeated for qid:1 cid:5 through cid:60 (lba 17024 through 24064, len:128) ...]
00:23:11.281 [2024-11-20 14:43:18.099927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.281 [2024-11-20 14:43:18.099934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION (00/08) notice pairs repeated for qid:1 cid:1 through cid:3 (lba 24704 through 24960, len:128) ...]
00:23:11.281 [2024-11-20 14:43:18.099980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.281 [2024-11-20 14:43:18.099986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.281 [2024-11-20 14:43:18.099993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.281 [2024-11-20 14:43:18.099998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.281 [2024-11-20 14:43:18.100005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.100012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.100017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1991300 is same with the state(6) to be set 00:23:11.282 [2024-11-20 14:43:18.100897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.100906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.100914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.100920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.100927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.100933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:11.282 [2024-11-20 14:43:18.100940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.100945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.100952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.100958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.100965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.100975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.100982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.100988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.100995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:11.282 [2024-11-20 14:43:18.101160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101230] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.282 [2024-11-20 14:43:18.101394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.282 [2024-11-20 14:43:18.101401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 
14:43:18.101452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101521] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 
[2024-11-20 14:43:18.101673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.101723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.101730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19925c0 is same with the state(6) to be set 00:23:11.283 [2024-11-20 14:43:18.102642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.102650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.102659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.102665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.102672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.102679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.102686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.102691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.102701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.102706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.102713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.102719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.102725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.102732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.102740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.102745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.283 [2024-11-20 14:43:18.102753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.283 [2024-11-20 14:43:18.102758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:11.284 [2024-11-20 14:43:18.102805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102877] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.102990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.102997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 
14:43:18.103095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103164] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.284 [2024-11-20 14:43:18.103268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.284 [2024-11-20 14:43:18.103275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.285 [2024-11-20 14:43:18.103281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.285 [2024-11-20 14:43:18.103286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.285 [2024-11-20 14:43:18.103293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.285 [2024-11-20 14:43:18.103299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.285 [2024-11-20 14:43:18.103305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.285 
[2024-11-20 14:43:18.103311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.285 [2024-11-20 14:43:18.103318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.285 [2024-11-20 14:43:18.103323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.285 [2024-11-20 14:43:18.103331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.285 [2024-11-20 14:43:18.103336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.285 [2024-11-20 14:43:18.103344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.285 [2024-11-20 14:43:18.103350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.285 [2024-11-20 14:43:18.103356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.285 [2024-11-20 14:43:18.103361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.285 [2024-11-20 14:43:18.103369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.285 [2024-11-20 14:43:18.103374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.285 [2024-11-20 14:43:18.103381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.285 [2024-11-20 14:43:18.103386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.285 [2024-11-20 14:43:18.103393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.285 [2024-11-20 14:43:18.103399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.285 [2024-11-20 14:43:18.103405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.285 [2024-11-20 14:43:18.103411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.285 [2024-11-20 14:43:18.103418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.285 [2024-11-20 14:43:18.103423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.285 [2024-11-20 14:43:18.103430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.285 [2024-11-20 14:43:18.103436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.285 [2024-11-20 14:43:18.103443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.285 [2024-11-20 14:43:18.103448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.285 [2024-11-20 14:43:18.103455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.285 [2024-11-20 14:43:18.103461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.285 [2024-11-20 14:43:18.103466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1993820 is same with the state(6) to be set 00:23:11.285 [2024-11-20 14:43:18.104580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:11.285 [2024-11-20 14:43:18.104604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:11.285 [2024-11-20 14:43:18.104615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:11.285 [2024-11-20 14:43:18.104625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:11.285 [2024-11-20 14:43:18.104689] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:23:11.285 [2024-11-20 14:43:18.104705] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:23:11.285 [2024-11-20 14:43:18.104714] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:23:11.285 [2024-11-20 14:43:18.104722] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:23:11.285 [2024-11-20 14:43:18.104795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:11.285 [2024-11-20 14:43:18.104805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:11.285 [2024-11-20 14:43:18.104814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:11.285 task offset: 25600 on job bdev=Nvme1n1 fails
00:23:11.285
00:23:11.285 Latency(us)
00:23:11.285 [2024-11-20T13:43:18.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:11.285 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.285 Job: Nvme1n1 ended in about 0.68 seconds with error
00:23:11.285 Verification LBA range: start 0x0 length 0x400
00:23:11.285 Nvme1n1 : 0.68 283.90 17.74 94.63 0.00 166410.03 14308.69 200977.07
00:23:11.285 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.285 Job: Nvme2n1 ended in about 0.69 seconds with error
00:23:11.285 Verification LBA range: start 0x0 length 0x400
00:23:11.285 Nvme2n1 : 0.69 285.41 17.84 92.72 0.00 161959.40 8792.75 200103.25
00:23:11.285 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.285 Job: Nvme3n1 ended in about 0.69 seconds with error
00:23:11.285 Verification LBA range: start 0x0 length 0x400
00:23:11.285 Nvme3n1 : 0.69 184.97 11.56 92.48 0.00 214436.98 17476.27 189617.49
00:23:11.285 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.285 Job: Nvme4n1 ended in about 0.69 seconds with error
00:23:11.285 Verification LBA range: start 0x0 length 0x400
00:23:11.285 Nvme4n1 : 0.69 184.50 11.53 92.25 0.00 208541.30 15837.87 175636.48
00:23:11.285 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.285 Job: Nvme5n1 ended in about 0.70 seconds with error
00:23:11.285 Verification LBA range: start 0x0 length 0x400
00:23:11.285 Nvme5n1 : 0.70 182.95 11.43 91.47 0.00 204047.93 14308.69 175636.48
00:23:11.285 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.285 Job: Nvme6n1 ended in about 0.70 seconds with error
00:23:11.285 Verification LBA range: start 0x0 length 0x400
00:23:11.285 Nvme6n1 : 0.70 182.34 11.40 91.17 0.00 198442.10 28180.48 207093.76
00:23:11.285 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.285 Job: Nvme7n1 ended in about 0.68 seconds with error
00:23:11.285 Verification LBA range: start 0x0 length 0x400
00:23:11.285 Nvme7n1 : 0.68 283.50 17.72 94.50 0.00 137777.39 8847.36 173015.04
00:23:11.285 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.285 Job: Nvme8n1 ended in about 0.70 seconds with error
00:23:11.285 Verification LBA range: start 0x0 length 0x400
00:23:11.285 Nvme8n1 : 0.70 187.58 11.72 90.95 0.00 182332.74 14090.24 161655.47
00:23:11.285 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.285 Job: Nvme9n1 ended in about 0.71 seconds with error
00:23:11.285 Verification LBA range: start 0x0 length 0x400
00:23:11.285 Nvme9n1 : 0.71 181.45 11.34 90.73 0.00 180369.92 14090.24 196608.00
00:23:11.285 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.285 Job: Nvme10n1 ended in about 0.71 seconds with error
00:23:11.285 Verification LBA range: start 0x0 length 0x400
00:23:11.285 Nvme10n1 : 0.71 181.01 11.31 90.50 0.00 174634.10 14527.15 177384.11
00:23:11.285 [2024-11-20T13:43:18.345Z] ===================================================================================================================
00:23:11.285 [2024-11-20T13:43:18.345Z] Total : 2137.61 133.60 921.41 0.00 180354.21 8792.75 207093.76
00:23:11.285 [2024-11-20 14:43:18.125810] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:11.285 [2024-11-20 14:43:18.125854]
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:11.285 [2024-11-20 14:43:18.126300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.285 [2024-11-20 14:43:18.126315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15823f0 with addr=10.0.0.2, port=4420 00:23:11.285 [2024-11-20 14:43:18.126323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15823f0 is same with the state(6) to be set 00:23:11.285 [2024-11-20 14:43:18.126662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.285 [2024-11-20 14:43:18.126670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x158c0b0 with addr=10.0.0.2, port=4420 00:23:11.285 [2024-11-20 14:43:18.126675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158c0b0 is same with the state(6) to be set 00:23:11.285 [2024-11-20 14:43:18.126879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.285 [2024-11-20 14:43:18.126887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b5f40 with addr=10.0.0.2, port=4420 00:23:11.285 [2024-11-20 14:43:18.126892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5f40 is same with the state(6) to be set 00:23:11.285 [2024-11-20 14:43:18.127089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.285 [2024-11-20 14:43:18.127101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19af960 with addr=10.0.0.2, port=4420 00:23:11.285 [2024-11-20 14:43:18.127106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19af960 is same with the state(6) to be set 00:23:11.286 [2024-11-20 14:43:18.129079] posix.c:1054:posix_sock_create: *ERROR*: connect() 
failed, errno = 111 00:23:11.286 [2024-11-20 14:43:18.129092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c97f0 with addr=10.0.0.2, port=4420 00:23:11.286 [2024-11-20 14:43:18.129098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c97f0 is same with the state(6) to be set 00:23:11.286 [2024-11-20 14:43:18.129416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.286 [2024-11-20 14:43:18.129425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c8a10 with addr=10.0.0.2, port=4420 00:23:11.286 [2024-11-20 14:43:18.129431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c8a10 is same with the state(6) to be set 00:23:11.286 [2024-11-20 14:43:18.129755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.286 [2024-11-20 14:43:18.129763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f8940 with addr=10.0.0.2, port=4420 00:23:11.286 [2024-11-20 14:43:18.129768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8940 is same with the state(6) to be set 00:23:11.286 [2024-11-20 14:43:18.130067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.286 [2024-11-20 14:43:18.130075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f8b20 with addr=10.0.0.2, port=4420 00:23:11.286 [2024-11-20 14:43:18.130081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8b20 is same with the state(6) to be set 00:23:11.286 [2024-11-20 14:43:18.130091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15823f0 (9): Bad file descriptor 00:23:11.286 [2024-11-20 14:43:18.130101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x158c0b0 (9): Bad file descriptor 00:23:11.286 [2024-11-20 14:43:18.130107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b5f40 (9): Bad file descriptor 00:23:11.286 [2024-11-20 14:43:18.130118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19af960 (9): Bad file descriptor 00:23:11.286 [2024-11-20 14:43:18.130149] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:23:11.286 [2024-11-20 14:43:18.130159] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:23:11.286 [2024-11-20 14:43:18.130172] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:23:11.286 [2024-11-20 14:43:18.130181] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:23:11.286 [2024-11-20 14:43:18.130190] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:23:11.286 [2024-11-20 14:43:18.130200] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
00:23:11.286 [2024-11-20 14:43:18.130413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:11.286 [2024-11-20 14:43:18.130425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:11.286 [2024-11-20 14:43:18.130454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c97f0 (9): Bad file descriptor 00:23:11.286 [2024-11-20 14:43:18.130463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c8a10 (9): Bad file descriptor 00:23:11.286 [2024-11-20 14:43:18.130471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f8940 (9): Bad file descriptor 00:23:11.286 [2024-11-20 14:43:18.130477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f8b20 (9): Bad file descriptor 00:23:11.286 [2024-11-20 14:43:18.130484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:11.286 [2024-11-20 14:43:18.130488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:11.286 [2024-11-20 14:43:18.130495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:11.286 [2024-11-20 14:43:18.130501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:23:11.286 [2024-11-20 14:43:18.130508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:11.286 [2024-11-20 14:43:18.130513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:11.286 [2024-11-20 14:43:18.130518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:11.286 [2024-11-20 14:43:18.130523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:11.286 [2024-11-20 14:43:18.130529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:11.286 [2024-11-20 14:43:18.130534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:11.286 [2024-11-20 14:43:18.130539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:11.286 [2024-11-20 14:43:18.130543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:11.286 [2024-11-20 14:43:18.130549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:11.286 [2024-11-20 14:43:18.130553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:11.286 [2024-11-20 14:43:18.130562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:11.286 [2024-11-20 14:43:18.130567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:23:11.286 [2024-11-20 14:43:18.130960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.286 [2024-11-20 14:43:18.130971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a1610 with addr=10.0.0.2, port=4420 00:23:11.286 [2024-11-20 14:43:18.130978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a1610 is same with the state(6) to be set 00:23:11.286 [2024-11-20 14:43:18.131191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.286 [2024-11-20 14:43:18.131200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1575b00 with addr=10.0.0.2, port=4420 00:23:11.286 [2024-11-20 14:43:18.131205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1575b00 is same with the state(6) to be set 00:23:11.286 [2024-11-20 14:43:18.131210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:11.286 [2024-11-20 14:43:18.131215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:11.286 [2024-11-20 14:43:18.131220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:11.286 [2024-11-20 14:43:18.131225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:11.286 [2024-11-20 14:43:18.131231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:11.286 [2024-11-20 14:43:18.131235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:11.286 [2024-11-20 14:43:18.131241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:23:11.286 [2024-11-20 14:43:18.131250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:11.286 [2024-11-20 14:43:18.131256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:11.286 [2024-11-20 14:43:18.131261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:11.286 [2024-11-20 14:43:18.131266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:11.286 [2024-11-20 14:43:18.131271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:23:11.286 [2024-11-20 14:43:18.131276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:11.286 [2024-11-20 14:43:18.131281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:11.286 [2024-11-20 14:43:18.131286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:11.286 [2024-11-20 14:43:18.131290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:23:11.286 [2024-11-20 14:43:18.131312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a1610 (9): Bad file descriptor 00:23:11.286 [2024-11-20 14:43:18.131321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1575b00 (9): Bad file descriptor 00:23:11.286 [2024-11-20 14:43:18.131341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:11.286 [2024-11-20 14:43:18.131346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:11.286 [2024-11-20 14:43:18.131351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:11.286 [2024-11-20 14:43:18.131358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:11.286 [2024-11-20 14:43:18.131364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:11.286 [2024-11-20 14:43:18.131369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:11.286 [2024-11-20 14:43:18.131374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:11.286 [2024-11-20 14:43:18.131379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:23:11.547 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:12.484 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3975599 00:23:12.484 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:12.484 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3975599 00:23:12.484 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:12.484 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.484 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:12.484 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.484 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3975599 00:23:12.484 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:12.484 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:12.484 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:12.484 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:12.484 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:12.484 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:23:12.484 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:12.484 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:12.485 rmmod nvme_tcp 00:23:12.485 rmmod nvme_fabrics 00:23:12.485 rmmod nvme_keyring 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:12.485 14:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3975217 ']' 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3975217 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3975217 ']' 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3975217 00:23:12.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3975217) - No such process 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3975217 is not found' 00:23:12.485 Process with pid 3975217 is not found 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.485 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.390 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:14.390 00:23:14.390 real 0m7.159s 00:23:14.390 user 0m16.978s 00:23:14.390 sys 0m0.883s 00:23:14.390 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:14.390 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:14.390 ************************************ 00:23:14.390 END TEST nvmf_shutdown_tc3 00:23:14.390 ************************************ 00:23:14.390 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:14.390 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:14.390 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:14.390 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:14.390 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:14.390 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:14.650 ************************************ 00:23:14.650 START TEST nvmf_shutdown_tc4 00:23:14.650 ************************************ 00:23:14.650 14:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:14.650 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:14.650 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:14.650 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:14.650 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.650 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:14.650 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:14.650 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:14.650 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.650 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.650 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.650 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:14.650 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:14.650 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:14.650 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:14.650 14:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:14.651 14:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:14.651 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:14.651 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.651 14:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:23:14.651 Found net devices under 0000:31:00.0: cvl_0_0 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:14.651 Found net devices under 0000:31:00.1: cvl_0_1 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:14.651 14:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.651 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.652 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:14.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:14.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:23:14.912 00:23:14.912 --- 10.0.0.2 ping statistics --- 00:23:14.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.912 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:14.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:23:14.912 00:23:14.912 --- 10.0.0.1 ping statistics --- 00:23:14.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.912 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:14.912 14:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3976905 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3976905 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3976905 ']' 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:14.912 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:14.913 [2024-11-20 14:43:21.798658] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:23:14.913 [2024-11-20 14:43:21.798722] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.913 [2024-11-20 14:43:21.879360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:14.913 [2024-11-20 14:43:21.918261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.913 [2024-11-20 14:43:21.918297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.913 [2024-11-20 14:43:21.918303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.913 [2024-11-20 14:43:21.918308] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.913 [2024-11-20 14:43:21.918313] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:14.913 [2024-11-20 14:43:21.919807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.913 [2024-11-20 14:43:21.919965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:14.913 [2024-11-20 14:43:21.920119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.913 [2024-11-20 14:43:21.920120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:15.852 [2024-11-20 14:43:22.617117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.852 14:43:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.852 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:15.852 Malloc1 00:23:15.852 [2024-11-20 14:43:22.700028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.852 Malloc2 00:23:15.852 Malloc3 00:23:15.852 Malloc4 00:23:15.852 Malloc5 00:23:15.852 Malloc6 00:23:15.852 Malloc7 00:23:16.112 Malloc8 00:23:16.112 Malloc9 
00:23:16.112 Malloc10 00:23:16.112 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.112 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:16.112 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:16.112 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:16.112 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3977139 00:23:16.112 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:16.112 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:16.112 [2024-11-20 14:43:23.118652] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:23:21.394 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:21.394 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3976905 00:23:21.394 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3976905 ']' 00:23:21.394 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3976905 00:23:21.394 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:21.394 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.394 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3976905 00:23:21.394 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:21.394 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:21.394 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3976905' 00:23:21.394 killing process with pid 3976905 00:23:21.394 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3976905 00:23:21.394 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3976905 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 
00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 [2024-11-20 14:43:28.136342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a73a50 is same with Write completed with error (sct=0, sc=8) 00:23:21.394 the state(6) to be set 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 [2024-11-20 14:43:28.136384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a73a50 is same with the state(6) to be set 00:23:21.394 [2024-11-20 14:43:28.136390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a73a50 is same with the state(6) to be set 00:23:21.394 Write completed with error 
(sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 [2024-11-20 14:43:28.136564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:21.394 [2024-11-20 14:43:28.136594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a73f20 is same with the state(6) to be set 00:23:21.394 [2024-11-20 14:43:28.136618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a73f20 is same with the state(6) to be set 00:23:21.394 [2024-11-20 14:43:28.136624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a73f20 is same with the state(6) to be set 00:23:21.394 [2024-11-20 14:43:28.136629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a73f20 is same with the state(6) to be set 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error 
(sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 [2024-11-20 14:43:28.136848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a743f0 is same with Write completed with error (sct=0, sc=8) 00:23:21.394 the state(6) to be set 00:23:21.394 starting I/O failed: -6 00:23:21.394 [2024-11-20 14:43:28.136871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a743f0 is same with Write completed with error (sct=0, sc=8) 00:23:21.394 the state(6) to be set 00:23:21.394 starting I/O failed: -6 00:23:21.394 [2024-11-20 14:43:28.136879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a743f0 is same with the state(6) to be set 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 [2024-11-20 14:43:28.136886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a743f0 is same with the state(6) to be set 00:23:21.394 [2024-11-20 14:43:28.136893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a743f0 is same with the state(6) to be set 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 [2024-11-20 14:43:28.136898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a743f0 is same with the state(6) to be set 00:23:21.394 [2024-11-20 14:43:28.136903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a743f0 is same with the state(6) to be set 00:23:21.394 [2024-11-20 14:43:28.136908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a743f0 is same with the state(6) to be set 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 [2024-11-20 
14:43:28.136913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a743f0 is same with the state(6) to be set 00:23:21.394 starting I/O failed: -6 00:23:21.394 [2024-11-20 14:43:28.136918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a743f0 is same with the state(6) to be set 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 starting I/O failed: -6 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 [2024-11-20 14:43:28.137154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a73560 is same with starting I/O 
failed: -6 00:23:21.394 the state(6) to be set 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 [2024-11-20 14:43:28.137176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a73560 is same with the state(6) to be set 00:23:21.394 [2024-11-20 14:43:28.137182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a73560 is same with the state(6) to be set 00:23:21.394 Write completed with error (sct=0, sc=8) 00:23:21.394 [2024-11-20 14:43:28.137188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a73560 is same with the state(6) to be set 00:23:21.395 [2024-11-20 14:43:28.137193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a73560 is same with the state(6) to be set 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 [2024-11-20 14:43:28.137296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting 
I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 [2024-11-20 14:43:28.137510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5be60 is same with the state(6) to be set 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 [2024-11-20 14:43:28.137523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5be60 is same with starting I/O failed: -6 00:23:21.395 the state(6) to be set 00:23:21.395 [2024-11-20 14:43:28.137529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5be60 is same with the state(6) to be set 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 [2024-11-20 14:43:28.137535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5be60 is same with the state(6) to be set 00:23:21.395 [2024-11-20 14:43:28.137540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5be60 is same with the state(6) to be set 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 [2024-11-20 14:43:28.137545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5be60 is same with the state(6) to be set 00:23:21.395 starting I/O failed: -6 00:23:21.395 [2024-11-20 14:43:28.137550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5be60 is same with the state(6) to be set 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 [2024-11-20 14:43:28.137555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5be60 
is same with the state(6) to be set 00:23:21.395 starting I/O failed: -6 00:23:21.395 [2024-11-20 14:43:28.137560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5be60 is same with the state(6) to be set 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with 
error (sct=0, sc=8) 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 [2024-11-20 14:43:28.137831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c350 is same with starting I/O failed: -6 00:23:21.395 the state(6) to be set 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 [2024-11-20 14:43:28.137846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c350 is same with the state(6) to be set 00:23:21.395 starting I/O failed: -6 00:23:21.395 [2024-11-20 14:43:28.137852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c350 is same with the state(6) to be set 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 [2024-11-20 14:43:28.137857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c350 is same with the state(6) to be set 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 [2024-11-20 14:43:28.137979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:21.395 [2024-11-20 14:43:28.138040] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c840 is same with the state(6) to be set 00:23:21.395 [2024-11-20 14:43:28.138052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c840 is same with the state(6) to be set 00:23:21.395 [2024-11-20 14:43:28.138057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c840 is same with the state(6) to be set 00:23:21.395 [2024-11-20 14:43:28.138063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c840 is same with the state(6) to be set 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 [2024-11-20 14:43:28.138068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c840 is same with the state(6) to be set 00:23:21.395 starting I/O failed: -6 00:23:21.395 [2024-11-20 14:43:28.138073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c840 is same with the state(6) to be set 00:23:21.395 [2024-11-20 14:43:28.138078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c840 is same with Write completed with error (sct=0, sc=8) 00:23:21.395 the state(6) to be set 00:23:21.395 starting I/O failed: -6 00:23:21.395 [2024-11-20 14:43:28.138086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c840 is same with Write completed with error (sct=0, sc=8) 00:23:21.395 the state(6) to be set 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 
00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.395 Write completed with error (sct=0, sc=8) 00:23:21.395 starting I/O failed: -6 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 starting I/O failed: -6 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 starting I/O failed: -6 00:23:21.396 [2024-11-20 14:43:28.138284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5b990 is same with the state(6) to be set 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 starting I/O failed: -6 00:23:21.396 [2024-11-20 14:43:28.138300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5b990 is same with the state(6) to be set 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 [2024-11-20 14:43:28.138305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5b990 is same with the state(6) to be set 00:23:21.396 starting I/O failed: -6 00:23:21.396 [2024-11-20 14:43:28.138311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5b990 is same with the state(6) to be set 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 [2024-11-20 14:43:28.138316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1a5b990 is same with the state(6) to be set 00:23:21.396 starting I/O failed: -6 00:23:21.396 [2024-11-20 14:43:28.138321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5b990 is same with the state(6) to be set 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 [2024-11-20 14:43:28.138326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5b990 is same with the state(6) to be set 00:23:21.396 starting I/O failed: -6 00:23:21.396 [2024-11-20 14:43:28.138332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5b990 is same with the state(6) to be set 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 starting I/O failed: -6 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 starting I/O failed: -6 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 starting I/O failed: -6 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 starting I/O failed: -6 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 starting I/O failed: -6 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 starting I/O failed: -6 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 starting I/O failed: -6 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 starting I/O failed: -6 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 starting I/O failed: -6 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 starting I/O failed: -6 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 starting I/O failed: -6 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 starting I/O failed: -6 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 starting I/O failed: -6 00:23:21.396 Write completed with error (sct=0, sc=8) 00:23:21.396 starting I/O failed: -6 00:23:21.396 Write completed with error 
(sct=0, sc=8)
00:23:21.396 starting I/O failed: -6
00:23:21.396 Write completed with error (sct=0, sc=8)
00:23:21.396 [the two lines above repeat once per failed write I/O on each qpair below; repeated occurrences are collapsed in this excerpt]
00:23:21.396 [2024-11-20 14:43:28.138536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5d200 is same with the state(6) to be set (logged 6 times, 14:43:28.138536-14:43:28.138579)
00:23:21.396 [2024-11-20 14:43:28.138797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5d6f0 is same with the state(6) to be set (logged 4 times, 14:43:28.138797-14:43:28.138817)
00:23:21.396 [2024-11-20 14:43:28.139054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:21.396 NVMe io qpair process completion error
00:23:21.396 [2024-11-20 14:43:28.139082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5dbe0 is same with the state(6) to be set (logged 7 times, 14:43:28.139082-14:43:28.139118)
00:23:21.396 [2024-11-20 14:43:28.139320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd30 is same with the state(6) to be set (logged 6 times, 14:43:28.139320-14:43:28.139360)
00:23:21.396 [2024-11-20 14:43:28.139864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:21.397 [2024-11-20 14:43:28.140546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:21.397 [2024-11-20 14:43:28.141174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:21.398 [2024-11-20 14:43:28.142358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:21.398 NVMe io qpair process completion error
00:23:21.398 [2024-11-20 14:43:28.143529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:21.398 [2024-11-20 14:43:28.144086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:21.399 [2024-11-20 14:43:28.145685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:21.399 NVMe io qpair process completion error
00:23:21.400 [2024-11-20 14:43:28.146444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:21.400 Write completed with error (sct=0, sc=8)
00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 [2024-11-20 14:43:28.147122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with 
error (sct=0, sc=8) 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 
Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 [2024-11-20 14:43:28.147805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 
00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.400 Write completed with error (sct=0, sc=8) 00:23:21.400 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: 
-6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O 
failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 [2024-11-20 14:43:28.149133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:21.401 NVMe io qpair process completion error 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O 
failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 [2024-11-20 14:43:28.150098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 
starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with 
error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 starting I/O failed: -6 00:23:21.401 Write completed with error (sct=0, sc=8) 00:23:21.401 [2024-11-20 14:43:28.150692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with 
error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 
starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 [2024-11-20 14:43:28.151394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 
00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, 
sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error 
(sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 Write completed with error (sct=0, sc=8) 00:23:21.402 starting I/O failed: -6 00:23:21.402 [2024-11-20 14:43:28.153424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:21.403 NVMe io qpair process completion error 00:23:21.403 Write completed with error (sct=0, sc=8) 00:23:21.403 starting I/O failed: -6 00:23:21.403 Write completed with error (sct=0, sc=8) 00:23:21.403 Write completed with error (sct=0, sc=8) 00:23:21.403 Write completed with error (sct=0, sc=8) 00:23:21.403 Write completed with error (sct=0, sc=8) 00:23:21.403 starting I/O failed: -6 00:23:21.403 Write completed with error (sct=0, sc=8) 00:23:21.403 Write completed with error (sct=0, sc=8) 00:23:21.403 Write completed with error (sct=0, sc=8) 00:23:21.403 Write completed with error (sct=0, sc=8) 00:23:21.403 starting I/O failed: -6 00:23:21.403 Write completed with error (sct=0, sc=8) 00:23:21.403 Write completed with error (sct=0, sc=8) 00:23:21.403 Write completed with error (sct=0, sc=8) 00:23:21.403 Write completed with error (sct=0, sc=8) 00:23:21.403 starting I/O failed: -6 00:23:21.403 Write completed with error (sct=0, sc=8) 00:23:21.403 Write completed with error (sct=0, sc=8) 00:23:21.403 Write completed with error (sct=0, sc=8) 00:23:21.403 Write completed with error 
(sct=0, sc=8) 00:23:21.403 starting I/O failed: -6 00:23:21.403 Write completed with error (sct=0, sc=8)
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records trimmed ...]
00:23:21.403 [2024-11-20 14:43:28.154324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error records trimmed ...]
00:23:21.403 [2024-11-20 14:43:28.154974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error records trimmed ...]
00:23:21.404 [2024-11-20 14:43:28.155660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error records trimmed ...]
00:23:21.404 [2024-11-20 14:43:28.158137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:21.404 NVMe io qpair process completion error
[... repeated write-error records trimmed ...]
00:23:21.405 [2024-11-20 14:43:28.159236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error records trimmed ...]
00:23:21.405 [2024-11-20 14:43:28.160374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error records trimmed ...]
00:23:21.406 [2024-11-20 14:43:28.161525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:21.406 NVMe io qpair process completion error
[... repeated write-error records trimmed ...]
00:23:21.406 [2024-11-20 14:43:28.162389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error records trimmed ...]
00:23:21.407 [2024-11-20 14:43:28.162970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error records trimmed ...]
00:23:21.407 [2024-11-20 14:43:28.163669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0,
sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error 
(sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.407 Write completed with error (sct=0, sc=8) 00:23:21.407 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with 
error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 [2024-11-20 14:43:28.165231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:21.408 NVMe io qpair process completion error 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with 
error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 
00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 [2024-11-20 14:43:28.166033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:21.408 starting I/O failed: -6 00:23:21.408 starting I/O failed: -6 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 
00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 starting I/O failed: -6 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.408 Write completed with error (sct=0, sc=8) 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 [2024-11-20 14:43:28.166725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 
starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 
Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 [2024-11-20 14:43:28.167429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O 
failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting 
I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.409 starting I/O failed: -6 00:23:21.409 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 
starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 [2024-11-20 14:43:28.169923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:21.410 NVMe io qpair process completion error 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, 
sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write 
completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 [2024-11-20 14:43:28.170869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write 
completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.410 starting I/O failed: -6 00:23:21.410 Write completed with error (sct=0, sc=8) 00:23:21.411 starting I/O failed: -6 00:23:21.411 Write completed with error (sct=0, sc=8) 00:23:21.411 Write completed with error (sct=0, sc=8) 00:23:21.411 Write completed with error (sct=0, sc=8) 00:23:21.411 starting I/O failed: -6 00:23:21.411 [2024-11-20 14:43:28.171478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:21.411 starting I/O failed: -6 00:23:21.411 Write completed with error (sct=0, sc=8) 00:23:21.411 Write completed with error (sct=0, sc=8) 00:23:21.411 starting I/O failed: -6 00:23:21.411 Write completed with error (sct=0, sc=8) 00:23:21.411 starting I/O failed: -6 00:23:21.411 Write completed with error (sct=0, sc=8) 00:23:21.411 starting I/O failed: -6 00:23:21.411 Write completed with error (sct=0, sc=8) 00:23:21.411 Write completed with error (sct=0, sc=8) 00:23:21.411 starting I/O failed: -6 00:23:21.411 Write completed with error (sct=0, sc=8) 00:23:21.411 starting I/O failed: -6 00:23:21.411 Write completed with error (sct=0, sc=8) 
00:23:21.411 starting I/O failed: -6
00:23:21.411 Write completed with error (sct=0, sc=8)
00:23:21.411 [2024-11-20 14:43:28.172192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:21.412 [2024-11-20 14:43:28.173742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:21.412 NVMe io qpair process completion error
00:23:21.412 Initializing NVMe Controllers
00:23:21.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:21.412 Controller IO queue size 128, less than required.
00:23:21.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:21.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:21.412 Controller IO queue size 128, less than required.
00:23:21.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:21.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:21.412 Controller IO queue size 128, less than required.
00:23:21.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:21.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:21.412 Controller IO queue size 128, less than required.
00:23:21.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:21.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:21.412 Controller IO queue size 128, less than required.
00:23:21.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:21.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:21.412 Controller IO queue size 128, less than required.
00:23:21.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:21.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:21.412 Controller IO queue size 128, less than required.
00:23:21.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:21.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:21.412 Controller IO queue size 128, less than required.
00:23:21.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:21.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:21.412 Controller IO queue size 128, less than required.
00:23:21.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:21.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:21.412 Controller IO queue size 128, less than required.
00:23:21.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:21.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:21.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:21.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:21.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:21.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:21.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:21.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:21.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:21.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:21.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:21.412 Initialization complete. Launching workers.
00:23:21.412 ========================================================
00:23:21.412 Latency(us)
00:23:21.412 Device Information                                                    :       IOPS      MiB/s    Average        min        max
00:23:21.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2654.64 114.07 48226.87 429.78 83948.74
00:23:21.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2602.50 111.83 49202.33 604.89 85215.35
00:23:21.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2618.33 112.51 48918.42 494.52 99760.52
00:23:21.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2619.18 112.54 48926.90 608.79 98327.41
00:23:21.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2568.93 110.38 49499.04 643.56 99302.59
00:23:21.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2562.38 110.10 49633.94 659.67 80681.40
00:23:21.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2565.34 110.23 49588.56 568.72 82263.61
00:23:21.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2577.58 110.76 49359.89 639.43 83448.33
00:23:21.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2576.11 110.69 49400.76 615.96 85249.03
00:23:21.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2639.44 113.41 48234.61 661.32 87744.34
00:23:21.412 ========================================================
00:23:21.412 Total                                                                 : 25984.43 1116.52 49093.33 429.78 99760.52
00:23:21.412
00:23:21.412 [2024-11-20 14:43:28.176190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccf9f0 is same with the state(6) to be set
00:23:21.412 [2024-11-20 14:43:28.176224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccf060 is same with the state(6) to be set
00:23:21.412 [2024-11-20 14:43:28.176253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccf390 is same with the state(6) to be set
00:23:21.412 [2024-11-20 14:43:28.176276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccf6c0 is same with the state(6) to be set
00:23:21.412 [2024-11-20 14:43:28.176297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd1540 is same with the state(6) to be set
00:23:21.412 [2024-11-20 14:43:28.176318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd1360 is same with the state(6) to be set
00:23:21.412 [2024-11-20 14:43:28.176341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd0380 is same with the state(6) to be set
00:23:21.412 [2024-11-20 14:43:28.176363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd06b0 is same with the state(6) to be set
00:23:21.412 [2024-11-20 14:43:28.176384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd09e0 is same with the state(6) to be set
00:23:21.412 [2024-11-20 14:43:28.176408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd0050 is same with the state(6) to be set
00:23:21.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:21.412 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3977139
00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3977139
00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3977139 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:22.352 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:22.352 rmmod nvme_tcp 00:23:22.352 rmmod nvme_fabrics 00:23:22.610 rmmod nvme_keyring 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3976905 ']' 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3976905 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3976905 ']' 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3976905 00:23:22.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3976905) - No such process 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3976905 is not found' 00:23:22.610 Process with pid 3976905 is not found 
00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.610 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.514 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:24.514 00:23:24.514 real 0m10.037s 00:23:24.514 user 0m27.209s 00:23:24.514 sys 0m3.994s 00:23:24.514 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.514 14:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:24.514 ************************************ 00:23:24.514 END TEST nvmf_shutdown_tc4 00:23:24.514 ************************************ 00:23:24.514 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:24.514 00:23:24.514 real 0m39.108s 00:23:24.514 user 1m37.807s 00:23:24.514 sys 0m11.129s 00:23:24.514 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.514 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:24.514 ************************************ 00:23:24.514 END TEST nvmf_shutdown 00:23:24.514 ************************************ 00:23:24.514 14:43:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:24.514 14:43:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:24.514 14:43:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:24.514 14:43:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:24.775 ************************************ 00:23:24.775 START TEST nvmf_nsid 00:23:24.775 ************************************ 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:24.775 * Looking for test storage... 
00:23:24.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.775 
14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:24.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.775 --rc genhtml_branch_coverage=1 00:23:24.775 --rc genhtml_function_coverage=1 00:23:24.775 --rc genhtml_legend=1 00:23:24.775 --rc geninfo_all_blocks=1 00:23:24.775 --rc 
geninfo_unexecuted_blocks=1 00:23:24.775 00:23:24.775 ' 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:24.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.775 --rc genhtml_branch_coverage=1 00:23:24.775 --rc genhtml_function_coverage=1 00:23:24.775 --rc genhtml_legend=1 00:23:24.775 --rc geninfo_all_blocks=1 00:23:24.775 --rc geninfo_unexecuted_blocks=1 00:23:24.775 00:23:24.775 ' 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:24.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.775 --rc genhtml_branch_coverage=1 00:23:24.775 --rc genhtml_function_coverage=1 00:23:24.775 --rc genhtml_legend=1 00:23:24.775 --rc geninfo_all_blocks=1 00:23:24.775 --rc geninfo_unexecuted_blocks=1 00:23:24.775 00:23:24.775 ' 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:24.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.775 --rc genhtml_branch_coverage=1 00:23:24.775 --rc genhtml_function_coverage=1 00:23:24.775 --rc genhtml_legend=1 00:23:24.775 --rc geninfo_all_blocks=1 00:23:24.775 --rc geninfo_unexecuted_blocks=1 00:23:24.775 00:23:24.775 ' 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.775 14:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:24.775 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:24.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:24.776 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:30.051 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:30.051 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:30.051 Found net devices under 0000:31:00.0: cvl_0_0 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:30.051 Found net devices under 0000:31:00.1: cvl_0_1 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:30.051 14:43:36 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:30.051 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:30.052 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.052 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:30.052 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:30.052 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:30.052 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:30.052 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:30.052 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:30.052 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:30.052 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:30.052 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:30.052 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:30.052 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:30.052 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:30.052 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:23:30.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:23:30.052 00:23:30.052 --- 10.0.0.2 ping statistics --- 00:23:30.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.052 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:23:30.052 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:30.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:30.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:23:30.052 00:23:30.052 --- 10.0.0.1 ping statistics --- 00:23:30.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.052 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:23:30.052 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.052 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:30.052 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:30.052 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.052 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:30.052 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:30.052 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.052 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:30.052 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:30.312 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:30.312 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:30.312 14:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:30.312 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:30.312 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3982947 00:23:30.312 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3982947 00:23:30.312 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3982947 ']' 00:23:30.312 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.312 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.312 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.312 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.312 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:30.312 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:30.312 [2024-11-20 14:43:37.158260] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:23:30.312 [2024-11-20 14:43:37.158326] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.312 [2024-11-20 14:43:37.249129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.312 [2024-11-20 14:43:37.300657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.312 [2024-11-20 14:43:37.300707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.312 [2024-11-20 14:43:37.300716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.312 [2024-11-20 14:43:37.300723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.312 [2024-11-20 14:43:37.300730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:30.312 [2024-11-20 14:43:37.301549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.250 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3983152 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.251 
14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=38a49851-a509-4884-9378-1723129f95d4 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=0e368d40-7c6c-4a80-80eb-4da3904090ce 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=fa2c7026-c2e1-4744-a143-8d17319e6732 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.251 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:31.251 null0 00:23:31.251 null1 00:23:31.251 [2024-11-20 14:43:38.009230] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:23:31.251 [2024-11-20 14:43:38.009288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3983152 ] 00:23:31.251 null2 00:23:31.251 [2024-11-20 14:43:38.013858] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.251 [2024-11-20 14:43:38.038057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.251 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.251 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3983152 /var/tmp/tgt2.sock 00:23:31.251 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3983152 ']' 00:23:31.251 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:31.251 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.251 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:31.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:23:31.251 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.251 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:31.251 [2024-11-20 14:43:38.086341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.251 [2024-11-20 14:43:38.123411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.251 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.251 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:31.251 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:31.819 [2024-11-20 14:43:38.582403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.819 [2024-11-20 14:43:38.598575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:31.819 nvme0n1 nvme0n2 00:23:31.819 nvme1n1 00:23:31.819 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:31.819 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:31.819 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:33.196 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:33.196 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:33.196 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:23:33.196 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:33.196 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:33.196 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:33.196 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:33.196 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:33.196 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:33.196 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:33.196 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:33.196 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:33.196 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 38a49851-a509-4884-9378-1723129f95d4 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:34.134 14:43:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=38a49851a509488493781723129f95d4 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 38A49851A509488493781723129F95D4 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 38A49851A509488493781723129F95D4 == \3\8\A\4\9\8\5\1\A\5\0\9\4\8\8\4\9\3\7\8\1\7\2\3\1\2\9\F\9\5\D\4 ]] 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 0e368d40-7c6c-4a80-80eb-4da3904090ce 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:34.134 
14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0e368d407c6c4a8080eb4da3904090ce 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0E368D407C6C4A8080EB4DA3904090CE 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 0E368D407C6C4A8080EB4DA3904090CE == \0\E\3\6\8\D\4\0\7\C\6\C\4\A\8\0\8\0\E\B\4\D\A\3\9\0\4\0\9\0\C\E ]] 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid fa2c7026-c2e1-4744-a143-8d17319e6732 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fa2c7026c2e14744a1438d17319e6732 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FA2C7026C2E14744A1438D17319E6732 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ FA2C7026C2E14744A1438D17319E6732 == \F\A\2\C\7\0\2\6\C\2\E\1\4\7\4\4\A\1\4\3\8\D\1\7\3\1\9\E\6\7\3\2 ]] 00:23:34.134 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:34.393 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:34.393 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:34.393 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3983152 00:23:34.393 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3983152 ']' 00:23:34.393 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3983152 00:23:34.393 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:34.393 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:34.393 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3983152 00:23:34.393 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:34.393 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:34.393 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3983152' 00:23:34.393 killing process with pid 3983152 00:23:34.393 14:43:41 
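The three NGUID checks above all follow the same pattern: `uuid2nguid <uuid>` on one side (its only visible piece is the `tr -d -` call at nvmf/common.sh@787) against the upper-cased `nvme id-ns ... | jq -r .nguid` output echoed at nsid.sh@43. A minimal sketch, folding the upper-casing into the helper so both sides of the `[[ ... == ... ]]` compare equal (an assumption; the real helper may upper-case elsewhere):

```shell
# Sketch of the uuid2nguid conversion seen in the trace: an NGUID is the
# UUID with the dashes stripped, compared in upper case.
# Grounded in the log: the 'tr -d -' call; the upper-casing is assumed
# to live here rather than in the caller.
uuid2nguid() {
    echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}
```

Applied to the UUIDs in the log, `uuid2nguid 0e368d40-7c6c-4a80-80eb-4da3904090ce` reproduces the `0E368D407C6C4A8080EB4DA3904090CE` value the test compares at nsid.sh@98.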
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3983152 00:23:34.393 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3983152 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:34.652 rmmod nvme_tcp 00:23:34.652 rmmod nvme_fabrics 00:23:34.652 rmmod nvme_keyring 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3982947 ']' 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3982947 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3982947 ']' 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3982947 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:34.652 14:43:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3982947 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3982947' 00:23:34.652 killing process with pid 3982947 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3982947 00:23:34.652 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3982947 00:23:34.912 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:34.912 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:34.912 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:34.912 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:34.912 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:34.912 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:34.912 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:34.912 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:34.912 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:34.912 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.912 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.913 14:43:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.815 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:36.815 00:23:36.815 real 0m12.259s 00:23:36.815 user 0m9.775s 00:23:36.815 sys 0m4.953s 00:23:36.815 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.815 14:43:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:36.815 ************************************ 00:23:36.815 END TEST nvmf_nsid 00:23:36.815 ************************************ 00:23:36.815 14:43:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:36.815 00:23:36.815 real 11m27.567s 00:23:36.815 user 25m4.357s 00:23:36.815 sys 3m5.239s 00:23:36.815 14:43:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.815 14:43:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:36.815 ************************************ 00:23:36.815 END TEST nvmf_target_extra 00:23:36.815 ************************************ 00:23:37.075 14:43:43 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:37.075 14:43:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:37.075 14:43:43 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:37.075 14:43:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:37.075 ************************************ 00:23:37.075 START TEST nvmf_host 00:23:37.075 ************************************ 00:23:37.075 14:43:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:37.075 * Looking for test storage... 
00:23:37.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:37.075 14:43:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:37.075 14:43:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:37.075 14:43:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:37.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.075 --rc genhtml_branch_coverage=1 00:23:37.075 --rc genhtml_function_coverage=1 00:23:37.075 --rc genhtml_legend=1 00:23:37.075 --rc geninfo_all_blocks=1 00:23:37.075 --rc geninfo_unexecuted_blocks=1 00:23:37.075 00:23:37.075 ' 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:37.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.075 --rc genhtml_branch_coverage=1 00:23:37.075 --rc genhtml_function_coverage=1 00:23:37.075 --rc genhtml_legend=1 00:23:37.075 --rc 
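The `lt 1.15 2` / `cmp_versions` trace above (scripts/common.sh@333–368) splits both version strings on `.`, `-` and `:` and compares them component by component. A sketch reconstructed from the visible xtrace lines; the real scripts/common.sh also handles `>`/`=` operators and non-numeric components, which are omitted here:

```shell
# Sketch of the 'lt' version comparison driven in the trace: returns 0
# (true) when $1 sorts strictly before $2.
# Grounded in the log: the 'IFS=.-: read -ra ver1/ver2' split and the
# per-component '(( ver1[v] > ver2[v] ))' / '(( ver1[v] < ver2[v] ))'
# checks; missing components default to 0 (assumption).
lt() {
    local -a ver1 ver2
    local v a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not 'less than'
}
```

For the values in the log, `lt 1.15 2` compares `1 < 2` on the first component and returns 0, matching the `return 0` at scripts/common.sh@368 that lets the lcov-options branch proceed.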
geninfo_all_blocks=1 00:23:37.075 --rc geninfo_unexecuted_blocks=1 00:23:37.075 00:23:37.075 ' 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:37.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.075 --rc genhtml_branch_coverage=1 00:23:37.075 --rc genhtml_function_coverage=1 00:23:37.075 --rc genhtml_legend=1 00:23:37.075 --rc geninfo_all_blocks=1 00:23:37.075 --rc geninfo_unexecuted_blocks=1 00:23:37.075 00:23:37.075 ' 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:37.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.075 --rc genhtml_branch_coverage=1 00:23:37.075 --rc genhtml_function_coverage=1 00:23:37.075 --rc genhtml_legend=1 00:23:37.075 --rc geninfo_all_blocks=1 00:23:37.075 --rc geninfo_unexecuted_blocks=1 00:23:37.075 00:23:37.075 ' 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:37.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.075 ************************************ 00:23:37.075 START TEST nvmf_multicontroller 00:23:37.075 ************************************ 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:37.075 * Looking for test storage... 
00:23:37.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:23:37.075 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:37.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.336 --rc genhtml_branch_coverage=1 00:23:37.336 --rc genhtml_function_coverage=1 
00:23:37.336 --rc genhtml_legend=1 00:23:37.336 --rc geninfo_all_blocks=1 00:23:37.336 --rc geninfo_unexecuted_blocks=1 00:23:37.336 00:23:37.336 ' 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:37.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.336 --rc genhtml_branch_coverage=1 00:23:37.336 --rc genhtml_function_coverage=1 00:23:37.336 --rc genhtml_legend=1 00:23:37.336 --rc geninfo_all_blocks=1 00:23:37.336 --rc geninfo_unexecuted_blocks=1 00:23:37.336 00:23:37.336 ' 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:37.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.336 --rc genhtml_branch_coverage=1 00:23:37.336 --rc genhtml_function_coverage=1 00:23:37.336 --rc genhtml_legend=1 00:23:37.336 --rc geninfo_all_blocks=1 00:23:37.336 --rc geninfo_unexecuted_blocks=1 00:23:37.336 00:23:37.336 ' 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:37.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.336 --rc genhtml_branch_coverage=1 00:23:37.336 --rc genhtml_function_coverage=1 00:23:37.336 --rc genhtml_legend=1 00:23:37.336 --rc geninfo_all_blocks=1 00:23:37.336 --rc geninfo_unexecuted_blocks=1 00:23:37.336 00:23:37.336 ' 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.336 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.337 14:43:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:37.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:37.337 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:42.611 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:42.612 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:42.612 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.612 14:43:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:42.612 Found net devices under 0000:31:00.0: cvl_0_0 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:42.612 Found net devices under 0000:31:00.1: cvl_0_1 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:42.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:42.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:23:42.612 00:23:42.612 --- 10.0.0.2 ping statistics --- 00:23:42.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.612 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:42.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:42.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:23:42.612 00:23:42.612 --- 10.0.0.1 ping statistics --- 00:23:42.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.612 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3988460 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3988460 00:23:42.612 14:43:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3988460 ']' 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.612 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:42.871 [2024-11-20 14:43:49.681689] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:23:42.871 [2024-11-20 14:43:49.681740] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.871 [2024-11-20 14:43:49.752464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:42.871 [2024-11-20 14:43:49.782212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.871 [2024-11-20 14:43:49.782240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:42.871 [2024-11-20 14:43:49.782251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.871 [2024-11-20 14:43:49.782256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.871 [2024-11-20 14:43:49.782260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.871 [2024-11-20 14:43:49.783289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.871 [2024-11-20 14:43:49.783425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.871 [2024-11-20 14:43:49.783427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.871 [2024-11-20 14:43:49.890703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.871 Malloc0 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.871 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.132 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.133 [2024-11-20 
14:43:49.937995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.133 [2024-11-20 14:43:49.945911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.133 Malloc1 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3988608 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3988608 /var/tmp/bdevperf.sock 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@835 -- # '[' -z 3988608 ']' 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:43.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.133 14:43:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.394 NVMe0n1 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.394 1 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:43.394 14:43:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.394 request: 00:23:43.394 { 00:23:43.394 "name": "NVMe0", 00:23:43.394 "trtype": "tcp", 00:23:43.394 "traddr": "10.0.0.2", 00:23:43.394 "adrfam": "ipv4", 00:23:43.394 "trsvcid": "4420", 00:23:43.394 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.394 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:43.394 "hostaddr": "10.0.0.1", 00:23:43.394 "prchk_reftag": false, 00:23:43.394 "prchk_guard": false, 00:23:43.394 "hdgst": false, 00:23:43.394 "ddgst": false, 00:23:43.394 "allow_unrecognized_csi": false, 00:23:43.394 "method": "bdev_nvme_attach_controller", 00:23:43.394 "req_id": 1 00:23:43.394 } 00:23:43.394 Got JSON-RPC error response 00:23:43.394 response: 00:23:43.394 { 00:23:43.394 "code": -114, 00:23:43.394 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:43.394 } 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:43.394 14:43:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.394 request: 00:23:43.394 { 00:23:43.394 "name": "NVMe0", 00:23:43.394 "trtype": "tcp", 00:23:43.394 "traddr": "10.0.0.2", 00:23:43.394 "adrfam": "ipv4", 00:23:43.394 "trsvcid": "4420", 00:23:43.394 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:43.394 "hostaddr": "10.0.0.1", 00:23:43.394 "prchk_reftag": false, 00:23:43.394 "prchk_guard": false, 00:23:43.394 "hdgst": false, 00:23:43.394 "ddgst": false, 00:23:43.394 "allow_unrecognized_csi": false, 00:23:43.394 "method": "bdev_nvme_attach_controller", 00:23:43.394 "req_id": 1 00:23:43.394 } 00:23:43.394 Got JSON-RPC error response 00:23:43.394 response: 00:23:43.394 { 00:23:43.394 "code": -114, 00:23:43.394 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:43.394 } 00:23:43.394 14:43:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.394 request: 00:23:43.394 { 00:23:43.394 "name": "NVMe0", 00:23:43.394 "trtype": "tcp", 00:23:43.394 "traddr": "10.0.0.2", 00:23:43.394 "adrfam": "ipv4", 00:23:43.394 "trsvcid": "4420", 00:23:43.394 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.394 "hostaddr": "10.0.0.1", 00:23:43.394 "prchk_reftag": false, 00:23:43.394 "prchk_guard": false, 00:23:43.394 "hdgst": false, 00:23:43.394 "ddgst": false, 00:23:43.394 "multipath": "disable", 00:23:43.394 "allow_unrecognized_csi": false, 00:23:43.394 "method": "bdev_nvme_attach_controller", 00:23:43.394 "req_id": 1 00:23:43.394 } 00:23:43.394 Got JSON-RPC error response 00:23:43.394 response: 00:23:43.394 { 00:23:43.394 "code": -114, 00:23:43.394 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:43.394 } 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:43.394 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.395 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:43.395 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.395 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.395 request: 00:23:43.395 { 00:23:43.395 "name": "NVMe0", 00:23:43.395 "trtype": "tcp", 00:23:43.395 "traddr": "10.0.0.2", 00:23:43.395 "adrfam": "ipv4", 00:23:43.395 "trsvcid": "4420", 00:23:43.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.395 "hostaddr": "10.0.0.1", 00:23:43.395 "prchk_reftag": false, 00:23:43.395 "prchk_guard": false, 00:23:43.395 "hdgst": false, 00:23:43.395 "ddgst": false, 00:23:43.395 "multipath": "failover", 00:23:43.395 "allow_unrecognized_csi": false, 00:23:43.395 "method": "bdev_nvme_attach_controller", 00:23:43.395 "req_id": 1 00:23:43.395 } 00:23:43.395 Got JSON-RPC error response 00:23:43.395 response: 00:23:43.395 { 00:23:43.395 "code": -114, 00:23:43.395 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:43.395 } 00:23:43.395 14:43:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:43.395 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:43.395 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:43.395 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:43.395 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:43.395 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:43.395 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.395 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.655 NVMe0n1 00:23:43.655 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.655 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:43.655 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.655 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.655 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.655 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:43.655 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.655 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.914 00:23:43.914 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.914 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:43.914 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:43.914 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.914 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.914 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.914 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:43.914 14:43:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:44.851 { 00:23:44.851 "results": [ 00:23:44.851 { 00:23:44.851 "job": "NVMe0n1", 00:23:44.851 "core_mask": "0x1", 00:23:44.851 "workload": "write", 00:23:44.851 "status": "finished", 00:23:44.851 "queue_depth": 128, 00:23:44.851 "io_size": 4096, 00:23:44.851 "runtime": 1.005063, 00:23:44.851 "iops": 20096.25267271803, 00:23:44.851 "mibps": 78.5009870028048, 00:23:44.851 "io_failed": 0, 00:23:44.851 "io_timeout": 0, 00:23:44.851 "avg_latency_us": 6359.506760405321, 00:23:44.851 "min_latency_us": 3372.3733333333334, 00:23:44.851 "max_latency_us": 10868.053333333333 00:23:44.851 } 00:23:44.851 ], 00:23:44.851 "core_count": 1 00:23:44.851 } 00:23:44.851 14:43:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:44.851 14:43:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.851 14:43:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.111 14:43:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.111 14:43:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:45.111 14:43:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3988608 00:23:45.111 14:43:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3988608 ']' 00:23:45.111 14:43:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3988608 00:23:45.111 14:43:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:45.111 14:43:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.111 14:43:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3988608 00:23:45.111 14:43:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:45.111 14:43:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:45.111 14:43:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3988608' 00:23:45.111 killing process with pid 3988608 00:23:45.111 14:43:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3988608 00:23:45.111 14:43:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3988608 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:45.111 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:45.111 [2024-11-20 14:43:50.031928] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:23:45.111 [2024-11-20 14:43:50.031991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3988608 ] 00:23:45.111 [2024-11-20 14:43:50.110968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.111 [2024-11-20 14:43:50.146936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.111 [2024-11-20 14:43:50.790439] bdev.c:4926:bdev_name_add: *ERROR*: Bdev name 802bb2bf-f29c-4dbc-b60c-aeefc9f835b7 already exists 00:23:45.111 [2024-11-20 14:43:50.790469] bdev.c:8146:bdev_register: *ERROR*: Unable to add uuid:802bb2bf-f29c-4dbc-b60c-aeefc9f835b7 alias for bdev NVMe1n1 00:23:45.111 [2024-11-20 14:43:50.790478] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:45.111 Running I/O for 1 seconds... 00:23:45.111 20070.00 IOPS, 78.40 MiB/s 00:23:45.111 Latency(us) 00:23:45.111 [2024-11-20T13:43:52.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.111 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:45.111 NVMe0n1 : 1.01 20096.25 78.50 0.00 0.00 6359.51 3372.37 10868.05 00:23:45.111 [2024-11-20T13:43:52.171Z] =================================================================================================================== 00:23:45.111 [2024-11-20T13:43:52.171Z] Total : 20096.25 78.50 0.00 0.00 6359.51 3372.37 10868.05 00:23:45.111 Received shutdown signal, test time was about 1.000000 seconds 00:23:45.111 00:23:45.111 Latency(us) 00:23:45.111 [2024-11-20T13:43:52.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.111 [2024-11-20T13:43:52.171Z] =================================================================================================================== 00:23:45.111 [2024-11-20T13:43:52.171Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:23:45.111 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:45.111 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:45.111 rmmod nvme_tcp 00:23:45.111 rmmod nvme_fabrics 00:23:45.111 rmmod nvme_keyring 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3988460 ']' 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3988460 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3988460 ']' 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3988460 
00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3988460 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3988460' 00:23:45.371 killing process with pid 3988460 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3988460 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3988460 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.371 14:43:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.353 14:43:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:47.353 00:23:47.353 real 0m10.342s 00:23:47.353 user 0m11.268s 00:23:47.353 sys 0m4.684s 00:23:47.353 14:43:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.353 14:43:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:47.353 ************************************ 00:23:47.353 END TEST nvmf_multicontroller 00:23:47.353 ************************************ 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.677 ************************************ 00:23:47.677 START TEST nvmf_aer 00:23:47.677 ************************************ 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:47.677 * Looking for test storage... 
00:23:47.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:47.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.677 --rc genhtml_branch_coverage=1 00:23:47.677 --rc genhtml_function_coverage=1 00:23:47.677 --rc genhtml_legend=1 00:23:47.677 --rc geninfo_all_blocks=1 00:23:47.677 --rc geninfo_unexecuted_blocks=1 00:23:47.677 00:23:47.677 ' 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:47.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.677 --rc 
genhtml_branch_coverage=1 00:23:47.677 --rc genhtml_function_coverage=1 00:23:47.677 --rc genhtml_legend=1 00:23:47.677 --rc geninfo_all_blocks=1 00:23:47.677 --rc geninfo_unexecuted_blocks=1 00:23:47.677 00:23:47.677 ' 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:47.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.677 --rc genhtml_branch_coverage=1 00:23:47.677 --rc genhtml_function_coverage=1 00:23:47.677 --rc genhtml_legend=1 00:23:47.677 --rc geninfo_all_blocks=1 00:23:47.677 --rc geninfo_unexecuted_blocks=1 00:23:47.677 00:23:47.677 ' 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:47.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.677 --rc genhtml_branch_coverage=1 00:23:47.677 --rc genhtml_function_coverage=1 00:23:47.677 --rc genhtml_legend=1 00:23:47.677 --rc geninfo_all_blocks=1 00:23:47.677 --rc geninfo_unexecuted_blocks=1 00:23:47.677 00:23:47.677 ' 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.677 14:43:54 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:47.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:47.677 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:47.678 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:47.678 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:47.678 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.678 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:47.678 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:47.678 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:47.678 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.678 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.678 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.678 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:47.678 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:47.678 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:47.678 14:43:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:52.957 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:52.957 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.957 14:43:59 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:52.957 Found net devices under 0000:31:00.0: cvl_0_0 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:52.957 Found net devices under 0000:31:00.1: cvl_0_1 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.957 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.957 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.957 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.216 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:53.216 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.216 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.216 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.216 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:53.216 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:53.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:53.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:23:53.216 00:23:53.216 --- 10.0.0.2 ping statistics --- 00:23:53.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.217 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:53.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:23:53.217 00:23:53.217 --- 10.0.0.1 ping statistics --- 00:23:53.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.217 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3993338 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3993338 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3993338 ']' 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.217 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:53.217 [2024-11-20 14:44:00.190733] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:23:53.217 [2024-11-20 14:44:00.190796] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.476 [2024-11-20 14:44:00.285880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:53.476 [2024-11-20 14:44:00.340804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:53.476 [2024-11-20 14:44:00.340860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.476 [2024-11-20 14:44:00.340869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.476 [2024-11-20 14:44:00.340876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.476 [2024-11-20 14:44:00.340883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.476 [2024-11-20 14:44:00.343000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.476 [2024-11-20 14:44:00.343168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.476 [2024-11-20 14:44:00.343338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.476 [2024-11-20 14:44:00.343340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.045 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.045 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:54.045 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:54.045 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.045 14:44:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.045 [2024-11-20 14:44:01.007160] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.045 Malloc0 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.045 [2024-11-20 14:44:01.057071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.045 [ 00:23:54.045 { 00:23:54.045 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:54.045 "subtype": "Discovery", 00:23:54.045 "listen_addresses": [], 00:23:54.045 "allow_any_host": true, 00:23:54.045 "hosts": [] 00:23:54.045 }, 00:23:54.045 { 00:23:54.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.045 "subtype": "NVMe", 00:23:54.045 "listen_addresses": [ 00:23:54.045 { 00:23:54.045 "trtype": "TCP", 00:23:54.045 "adrfam": "IPv4", 00:23:54.045 "traddr": "10.0.0.2", 00:23:54.045 "trsvcid": "4420" 00:23:54.045 } 00:23:54.045 ], 00:23:54.045 "allow_any_host": true, 00:23:54.045 "hosts": [], 00:23:54.045 "serial_number": "SPDK00000000000001", 00:23:54.045 "model_number": "SPDK bdev Controller", 00:23:54.045 "max_namespaces": 2, 00:23:54.045 "min_cntlid": 1, 00:23:54.045 "max_cntlid": 65519, 00:23:54.045 "namespaces": [ 00:23:54.045 { 00:23:54.045 "nsid": 1, 00:23:54.045 "bdev_name": "Malloc0", 00:23:54.045 "name": "Malloc0", 00:23:54.045 "nguid": "33F89A077B7248A39D8B8440603C88EE", 00:23:54.045 "uuid": "33f89a07-7b72-48a3-9d8b-8440603c88ee" 00:23:54.045 } 00:23:54.045 ] 00:23:54.045 } 00:23:54.045 ] 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3993665 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:54.045 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.304 Malloc1 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.304 Asynchronous Event Request test 00:23:54.304 Attaching to 10.0.0.2 00:23:54.304 Attached to 10.0.0.2 00:23:54.304 Registering asynchronous event callbacks... 00:23:54.304 Starting namespace attribute notice tests for all controllers... 00:23:54.304 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:54.304 aer_cb - Changed Namespace 00:23:54.304 Cleaning up... 
00:23:54.304 [ 00:23:54.304 { 00:23:54.304 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:54.304 "subtype": "Discovery", 00:23:54.304 "listen_addresses": [], 00:23:54.304 "allow_any_host": true, 00:23:54.304 "hosts": [] 00:23:54.304 }, 00:23:54.304 { 00:23:54.304 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.304 "subtype": "NVMe", 00:23:54.304 "listen_addresses": [ 00:23:54.304 { 00:23:54.304 "trtype": "TCP", 00:23:54.304 "adrfam": "IPv4", 00:23:54.304 "traddr": "10.0.0.2", 00:23:54.304 "trsvcid": "4420" 00:23:54.304 } 00:23:54.304 ], 00:23:54.304 "allow_any_host": true, 00:23:54.304 "hosts": [], 00:23:54.304 "serial_number": "SPDK00000000000001", 00:23:54.304 "model_number": "SPDK bdev Controller", 00:23:54.304 "max_namespaces": 2, 00:23:54.304 "min_cntlid": 1, 00:23:54.304 "max_cntlid": 65519, 00:23:54.304 "namespaces": [ 00:23:54.304 { 00:23:54.304 "nsid": 1, 00:23:54.304 "bdev_name": "Malloc0", 00:23:54.304 "name": "Malloc0", 00:23:54.304 "nguid": "33F89A077B7248A39D8B8440603C88EE", 00:23:54.304 "uuid": "33f89a07-7b72-48a3-9d8b-8440603c88ee" 00:23:54.304 }, 00:23:54.304 { 00:23:54.304 "nsid": 2, 00:23:54.304 "bdev_name": "Malloc1", 00:23:54.304 "name": "Malloc1", 00:23:54.304 "nguid": "D448BD63E40E4B419A95931689BB6526", 00:23:54.304 "uuid": "d448bd63-e40e-4b41-9a95-931689bb6526" 00:23:54.304 } 00:23:54.304 ] 00:23:54.304 } 00:23:54.304 ] 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3993665 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.304 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.305 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.305 14:44:01 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:54.305 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.305 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:54.564 rmmod nvme_tcp 00:23:54.564 rmmod nvme_fabrics 00:23:54.564 rmmod nvme_keyring 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
3993338 ']' 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3993338 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3993338 ']' 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3993338 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3993338 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3993338' 00:23:54.564 killing process with pid 3993338 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3993338 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3993338 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.564 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:57.101 00:23:57.101 real 0m9.194s 00:23:57.101 user 0m6.587s 00:23:57.101 sys 0m4.572s 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.101 ************************************ 00:23:57.101 END TEST nvmf_aer 00:23:57.101 ************************************ 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.101 ************************************ 00:23:57.101 START TEST nvmf_async_init 00:23:57.101 ************************************ 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:57.101 * Looking for test storage... 
00:23:57.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:57.101 14:44:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:57.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.101 --rc genhtml_branch_coverage=1 00:23:57.101 --rc genhtml_function_coverage=1 00:23:57.101 --rc genhtml_legend=1 00:23:57.101 --rc geninfo_all_blocks=1 00:23:57.101 --rc geninfo_unexecuted_blocks=1 00:23:57.101 
00:23:57.101 ' 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:57.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.101 --rc genhtml_branch_coverage=1 00:23:57.101 --rc genhtml_function_coverage=1 00:23:57.101 --rc genhtml_legend=1 00:23:57.101 --rc geninfo_all_blocks=1 00:23:57.101 --rc geninfo_unexecuted_blocks=1 00:23:57.101 00:23:57.101 ' 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:57.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.101 --rc genhtml_branch_coverage=1 00:23:57.101 --rc genhtml_function_coverage=1 00:23:57.101 --rc genhtml_legend=1 00:23:57.101 --rc geninfo_all_blocks=1 00:23:57.101 --rc geninfo_unexecuted_blocks=1 00:23:57.101 00:23:57.101 ' 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:57.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.101 --rc genhtml_branch_coverage=1 00:23:57.101 --rc genhtml_function_coverage=1 00:23:57.101 --rc genhtml_legend=1 00:23:57.101 --rc geninfo_all_blocks=1 00:23:57.101 --rc geninfo_unexecuted_blocks=1 00:23:57.101 00:23:57.101 ' 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:57.101 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:57.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5c468dd3b1bd4eeebbe13de0a9bb72a7 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:57.102 14:44:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.375 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.375 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:02.375 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:02.375 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:02.375 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:02.375 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:02.375 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:02.375 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:02.375 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:02.375 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:02.375 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:02.375 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:02.376 14:44:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:02.376 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:02.376 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:02.376 Found net devices under 0000:31:00.0: cvl_0_0 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:02.376 Found net devices under 0000:31:00.1: cvl_0_1 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:02.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:02.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:24:02.376 00:24:02.376 --- 10.0.0.2 ping statistics --- 00:24:02.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.376 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:02.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:02.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:24:02.376 00:24:02.376 --- 10.0.0.1 ping statistics --- 00:24:02.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.376 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3998008 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3998008 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3998008 ']' 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.376 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:02.376 [2024-11-20 14:44:09.336900] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:24:02.376 [2024-11-20 14:44:09.336960] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.376 [2024-11-20 14:44:09.430960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.635 [2024-11-20 14:44:09.483350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.635 [2024-11-20 14:44:09.483399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.635 [2024-11-20 14:44:09.483408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.635 [2024-11-20 14:44:09.483415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.635 [2024-11-20 14:44:09.483422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:02.635 [2024-11-20 14:44:09.484276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.201 [2024-11-20 14:44:10.173559] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.201 null0 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5c468dd3b1bd4eeebbe13de0a9bb72a7 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.201 [2024-11-20 14:44:10.213862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.201 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.460 nvme0n1 00:24:03.460 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.460 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:03.460 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.460 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.460 [ 00:24:03.460 { 00:24:03.460 "name": "nvme0n1", 00:24:03.460 "aliases": [ 00:24:03.460 "5c468dd3-b1bd-4eee-bbe1-3de0a9bb72a7" 00:24:03.460 ], 00:24:03.460 "product_name": "NVMe disk", 00:24:03.460 "block_size": 512, 00:24:03.460 "num_blocks": 2097152, 00:24:03.460 "uuid": "5c468dd3-b1bd-4eee-bbe1-3de0a9bb72a7", 00:24:03.460 "numa_id": 0, 00:24:03.460 "assigned_rate_limits": { 00:24:03.460 "rw_ios_per_sec": 0, 00:24:03.460 "rw_mbytes_per_sec": 0, 00:24:03.460 "r_mbytes_per_sec": 0, 00:24:03.460 "w_mbytes_per_sec": 0 00:24:03.460 }, 00:24:03.460 "claimed": false, 00:24:03.460 "zoned": false, 00:24:03.460 "supported_io_types": { 00:24:03.460 "read": true, 00:24:03.460 "write": true, 00:24:03.460 "unmap": false, 00:24:03.460 "flush": true, 00:24:03.460 "reset": true, 00:24:03.460 "nvme_admin": true, 00:24:03.460 "nvme_io": true, 00:24:03.460 "nvme_io_md": false, 00:24:03.460 "write_zeroes": true, 00:24:03.460 "zcopy": false, 00:24:03.460 "get_zone_info": false, 00:24:03.460 "zone_management": false, 00:24:03.460 "zone_append": false, 00:24:03.460 "compare": true, 00:24:03.460 "compare_and_write": true, 00:24:03.460 "abort": true, 00:24:03.460 "seek_hole": false, 00:24:03.460 "seek_data": false, 00:24:03.460 "copy": true, 00:24:03.460 
"nvme_iov_md": false 00:24:03.460 }, 00:24:03.460 "memory_domains": [ 00:24:03.460 { 00:24:03.460 "dma_device_id": "system", 00:24:03.461 "dma_device_type": 1 00:24:03.461 } 00:24:03.461 ], 00:24:03.461 "driver_specific": { 00:24:03.461 "nvme": [ 00:24:03.461 { 00:24:03.461 "trid": { 00:24:03.461 "trtype": "TCP", 00:24:03.461 "adrfam": "IPv4", 00:24:03.461 "traddr": "10.0.0.2", 00:24:03.461 "trsvcid": "4420", 00:24:03.461 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:03.461 }, 00:24:03.461 "ctrlr_data": { 00:24:03.461 "cntlid": 1, 00:24:03.461 "vendor_id": "0x8086", 00:24:03.461 "model_number": "SPDK bdev Controller", 00:24:03.461 "serial_number": "00000000000000000000", 00:24:03.461 "firmware_revision": "25.01", 00:24:03.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:03.461 "oacs": { 00:24:03.461 "security": 0, 00:24:03.461 "format": 0, 00:24:03.461 "firmware": 0, 00:24:03.461 "ns_manage": 0 00:24:03.461 }, 00:24:03.461 "multi_ctrlr": true, 00:24:03.461 "ana_reporting": false 00:24:03.461 }, 00:24:03.461 "vs": { 00:24:03.461 "nvme_version": "1.3" 00:24:03.461 }, 00:24:03.461 "ns_data": { 00:24:03.461 "id": 1, 00:24:03.461 "can_share": true 00:24:03.461 } 00:24:03.461 } 00:24:03.461 ], 00:24:03.461 "mp_policy": "active_passive" 00:24:03.461 } 00:24:03.461 } 00:24:03.461 ] 00:24:03.461 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.461 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:03.461 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.461 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.461 [2024-11-20 14:44:10.463348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:03.461 [2024-11-20 14:44:10.463431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xf12dd0 (9): Bad file descriptor 00:24:03.722 [2024-11-20 14:44:10.595346] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:24:03.722 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.722 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:03.722 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.722 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.722 [ 00:24:03.722 { 00:24:03.722 "name": "nvme0n1", 00:24:03.722 "aliases": [ 00:24:03.722 "5c468dd3-b1bd-4eee-bbe1-3de0a9bb72a7" 00:24:03.722 ], 00:24:03.722 "product_name": "NVMe disk", 00:24:03.722 "block_size": 512, 00:24:03.722 "num_blocks": 2097152, 00:24:03.722 "uuid": "5c468dd3-b1bd-4eee-bbe1-3de0a9bb72a7", 00:24:03.722 "numa_id": 0, 00:24:03.722 "assigned_rate_limits": { 00:24:03.722 "rw_ios_per_sec": 0, 00:24:03.722 "rw_mbytes_per_sec": 0, 00:24:03.722 "r_mbytes_per_sec": 0, 00:24:03.722 "w_mbytes_per_sec": 0 00:24:03.722 }, 00:24:03.722 "claimed": false, 00:24:03.722 "zoned": false, 00:24:03.722 "supported_io_types": { 00:24:03.722 "read": true, 00:24:03.722 "write": true, 00:24:03.722 "unmap": false, 00:24:03.722 "flush": true, 00:24:03.722 "reset": true, 00:24:03.722 "nvme_admin": true, 00:24:03.722 "nvme_io": true, 00:24:03.722 "nvme_io_md": false, 00:24:03.722 "write_zeroes": true, 00:24:03.722 "zcopy": false, 00:24:03.722 "get_zone_info": false, 00:24:03.722 "zone_management": false, 00:24:03.722 "zone_append": false, 00:24:03.722 "compare": true, 00:24:03.722 "compare_and_write": true, 00:24:03.722 "abort": true, 00:24:03.722 "seek_hole": false, 00:24:03.722 "seek_data": false, 00:24:03.722 "copy": true, 00:24:03.722 "nvme_iov_md": false 00:24:03.722 }, 00:24:03.722 "memory_domains": [ 
00:24:03.722 { 00:24:03.722 "dma_device_id": "system", 00:24:03.722 "dma_device_type": 1 00:24:03.722 } 00:24:03.722 ], 00:24:03.722 "driver_specific": { 00:24:03.722 "nvme": [ 00:24:03.722 { 00:24:03.722 "trid": { 00:24:03.722 "trtype": "TCP", 00:24:03.722 "adrfam": "IPv4", 00:24:03.722 "traddr": "10.0.0.2", 00:24:03.722 "trsvcid": "4420", 00:24:03.722 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:03.722 }, 00:24:03.722 "ctrlr_data": { 00:24:03.722 "cntlid": 2, 00:24:03.722 "vendor_id": "0x8086", 00:24:03.722 "model_number": "SPDK bdev Controller", 00:24:03.722 "serial_number": "00000000000000000000", 00:24:03.722 "firmware_revision": "25.01", 00:24:03.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:03.722 "oacs": { 00:24:03.722 "security": 0, 00:24:03.722 "format": 0, 00:24:03.722 "firmware": 0, 00:24:03.722 "ns_manage": 0 00:24:03.722 }, 00:24:03.722 "multi_ctrlr": true, 00:24:03.722 "ana_reporting": false 00:24:03.722 }, 00:24:03.722 "vs": { 00:24:03.722 "nvme_version": "1.3" 00:24:03.722 }, 00:24:03.722 "ns_data": { 00:24:03.722 "id": 1, 00:24:03.722 "can_share": true 00:24:03.722 } 00:24:03.722 } 00:24:03.722 ], 00:24:03.722 "mp_policy": "active_passive" 00:24:03.722 } 00:24:03.722 } 00:24:03.722 ] 00:24:03.722 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.722 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.722 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.722 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.722 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.722 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:03.722 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.8zuFkEVhO8 
00:24:03.722 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.8zuFkEVhO8 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.8zuFkEVhO8 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.723 [2024-11-20 14:44:10.651953] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:03.723 [2024-11-20 14:44:10.652116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.723 [2024-11-20 14:44:10.668013] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:03.723 nvme0n1 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.723 [ 00:24:03.723 { 00:24:03.723 "name": "nvme0n1", 00:24:03.723 "aliases": [ 00:24:03.723 "5c468dd3-b1bd-4eee-bbe1-3de0a9bb72a7" 00:24:03.723 ], 00:24:03.723 "product_name": "NVMe disk", 00:24:03.723 "block_size": 512, 00:24:03.723 "num_blocks": 2097152, 00:24:03.723 "uuid": "5c468dd3-b1bd-4eee-bbe1-3de0a9bb72a7", 00:24:03.723 "numa_id": 0, 00:24:03.723 "assigned_rate_limits": { 00:24:03.723 "rw_ios_per_sec": 0, 00:24:03.723 
"rw_mbytes_per_sec": 0, 00:24:03.723 "r_mbytes_per_sec": 0, 00:24:03.723 "w_mbytes_per_sec": 0 00:24:03.723 }, 00:24:03.723 "claimed": false, 00:24:03.723 "zoned": false, 00:24:03.723 "supported_io_types": { 00:24:03.723 "read": true, 00:24:03.723 "write": true, 00:24:03.723 "unmap": false, 00:24:03.723 "flush": true, 00:24:03.723 "reset": true, 00:24:03.723 "nvme_admin": true, 00:24:03.723 "nvme_io": true, 00:24:03.723 "nvme_io_md": false, 00:24:03.723 "write_zeroes": true, 00:24:03.723 "zcopy": false, 00:24:03.723 "get_zone_info": false, 00:24:03.723 "zone_management": false, 00:24:03.723 "zone_append": false, 00:24:03.723 "compare": true, 00:24:03.723 "compare_and_write": true, 00:24:03.723 "abort": true, 00:24:03.723 "seek_hole": false, 00:24:03.723 "seek_data": false, 00:24:03.723 "copy": true, 00:24:03.723 "nvme_iov_md": false 00:24:03.723 }, 00:24:03.723 "memory_domains": [ 00:24:03.723 { 00:24:03.723 "dma_device_id": "system", 00:24:03.723 "dma_device_type": 1 00:24:03.723 } 00:24:03.723 ], 00:24:03.723 "driver_specific": { 00:24:03.723 "nvme": [ 00:24:03.723 { 00:24:03.723 "trid": { 00:24:03.723 "trtype": "TCP", 00:24:03.723 "adrfam": "IPv4", 00:24:03.723 "traddr": "10.0.0.2", 00:24:03.723 "trsvcid": "4421", 00:24:03.723 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:03.723 }, 00:24:03.723 "ctrlr_data": { 00:24:03.723 "cntlid": 3, 00:24:03.723 "vendor_id": "0x8086", 00:24:03.723 "model_number": "SPDK bdev Controller", 00:24:03.723 "serial_number": "00000000000000000000", 00:24:03.723 "firmware_revision": "25.01", 00:24:03.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:03.723 "oacs": { 00:24:03.723 "security": 0, 00:24:03.723 "format": 0, 00:24:03.723 "firmware": 0, 00:24:03.723 "ns_manage": 0 00:24:03.723 }, 00:24:03.723 "multi_ctrlr": true, 00:24:03.723 "ana_reporting": false 00:24:03.723 }, 00:24:03.723 "vs": { 00:24:03.723 "nvme_version": "1.3" 00:24:03.723 }, 00:24:03.723 "ns_data": { 00:24:03.723 "id": 1, 00:24:03.723 "can_share": true 00:24:03.723 } 
00:24:03.723 } 00:24:03.723 ], 00:24:03.723 "mp_policy": "active_passive" 00:24:03.723 } 00:24:03.723 } 00:24:03.723 ] 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.8zuFkEVhO8 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.723 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.723 rmmod nvme_tcp 00:24:03.983 rmmod nvme_fabrics 00:24:03.983 rmmod nvme_keyring 00:24:03.983 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.983 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:03.983 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:03.983 14:44:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3998008 ']' 00:24:03.983 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3998008 00:24:03.983 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3998008 ']' 00:24:03.983 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3998008 00:24:03.983 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:03.983 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.983 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3998008 00:24:03.983 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:03.983 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:03.983 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3998008' 00:24:03.983 killing process with pid 3998008 00:24:03.983 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3998008 00:24:03.983 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3998008 00:24:03.983 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:03.983 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:03.983 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:03.983 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:03.983 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:03.983 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:03.983 
14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.242 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.242 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:04.242 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.242 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.242 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.145 14:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:06.146 00:24:06.146 real 0m9.400s 00:24:06.146 user 0m3.291s 00:24:06.146 sys 0m4.502s 00:24:06.146 14:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.146 14:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:06.146 ************************************ 00:24:06.146 END TEST nvmf_async_init 00:24:06.146 ************************************ 00:24:06.146 14:44:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:06.146 14:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:06.146 14:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.146 14:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.146 ************************************ 00:24:06.146 START TEST dma 00:24:06.146 ************************************ 00:24:06.146 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:24:06.146 * Looking for test storage... 00:24:06.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.405 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:06.405 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:24:06.405 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:06.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.406 --rc genhtml_branch_coverage=1 00:24:06.406 --rc genhtml_function_coverage=1 00:24:06.406 --rc genhtml_legend=1 00:24:06.406 --rc geninfo_all_blocks=1 00:24:06.406 --rc geninfo_unexecuted_blocks=1 00:24:06.406 00:24:06.406 ' 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:06.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.406 --rc genhtml_branch_coverage=1 00:24:06.406 --rc genhtml_function_coverage=1 
00:24:06.406 --rc genhtml_legend=1 00:24:06.406 --rc geninfo_all_blocks=1 00:24:06.406 --rc geninfo_unexecuted_blocks=1 00:24:06.406 00:24:06.406 ' 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:06.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.406 --rc genhtml_branch_coverage=1 00:24:06.406 --rc genhtml_function_coverage=1 00:24:06.406 --rc genhtml_legend=1 00:24:06.406 --rc geninfo_all_blocks=1 00:24:06.406 --rc geninfo_unexecuted_blocks=1 00:24:06.406 00:24:06.406 ' 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:06.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.406 --rc genhtml_branch_coverage=1 00:24:06.406 --rc genhtml_function_coverage=1 00:24:06.406 --rc genhtml_legend=1 00:24:06.406 --rc geninfo_all_blocks=1 00:24:06.406 --rc geninfo_unexecuted_blocks=1 00:24:06.406 00:24:06.406 ' 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:06.406 
14:44:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:06.406 00:24:06.406 real 0m0.154s 00:24:06.406 user 0m0.088s 00:24:06.406 sys 0m0.075s 00:24:06.406 14:44:13 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.406 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:06.406 ************************************ 00:24:06.407 END TEST dma 00:24:06.407 ************************************ 00:24:06.407 14:44:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:06.407 14:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:06.407 14:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.407 14:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.407 ************************************ 00:24:06.407 START TEST nvmf_identify 00:24:06.407 ************************************ 00:24:06.407 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:06.407 * Looking for test storage... 
00:24:06.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.407 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:06.407 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:24:06.407 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:06.666 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:06.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.667 --rc genhtml_branch_coverage=1 00:24:06.667 --rc genhtml_function_coverage=1 00:24:06.667 --rc genhtml_legend=1 00:24:06.667 --rc geninfo_all_blocks=1 00:24:06.667 --rc geninfo_unexecuted_blocks=1 00:24:06.667 00:24:06.667 ' 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:24:06.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.667 --rc genhtml_branch_coverage=1 00:24:06.667 --rc genhtml_function_coverage=1 00:24:06.667 --rc genhtml_legend=1 00:24:06.667 --rc geninfo_all_blocks=1 00:24:06.667 --rc geninfo_unexecuted_blocks=1 00:24:06.667 00:24:06.667 ' 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:06.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.667 --rc genhtml_branch_coverage=1 00:24:06.667 --rc genhtml_function_coverage=1 00:24:06.667 --rc genhtml_legend=1 00:24:06.667 --rc geninfo_all_blocks=1 00:24:06.667 --rc geninfo_unexecuted_blocks=1 00:24:06.667 00:24:06.667 ' 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:06.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.667 --rc genhtml_branch_coverage=1 00:24:06.667 --rc genhtml_function_coverage=1 00:24:06.667 --rc genhtml_legend=1 00:24:06.667 --rc geninfo_all_blocks=1 00:24:06.667 --rc geninfo_unexecuted_blocks=1 00:24:06.667 00:24:06.667 ' 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.667 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:11.969 14:44:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:11.969 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:11.969 
14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:11.969 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:11.969 Found net devices under 0000:31:00.0: cvl_0_0 00:24:11.969 14:44:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:11.969 Found net devices under 0000:31:00.1: cvl_0_1 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:11.969 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:11.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:11.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:24:11.970 00:24:11.970 --- 10.0.0.2 ping statistics --- 00:24:11.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.970 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:11.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:11.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:24:11.970 00:24:11.970 --- 10.0.0.1 ping statistics --- 00:24:11.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.970 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4002779 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4002779 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 4002779 ']' 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:11.970 14:44:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:11.970 [2024-11-20 14:44:18.953686] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:24:11.970 [2024-11-20 14:44:18.953755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.231 [2024-11-20 14:44:19.046344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:12.231 [2024-11-20 14:44:19.101241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.231 [2024-11-20 14:44:19.101306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.231 [2024-11-20 14:44:19.101316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.231 [2024-11-20 14:44:19.101323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.231 [2024-11-20 14:44:19.101330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:12.231 [2024-11-20 14:44:19.103343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:12.231 [2024-11-20 14:44:19.103512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:12.231 [2024-11-20 14:44:19.103663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:12.231 [2024-11-20 14:44:19.103664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:24:12.807 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:12.807 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0
00:24:12.807 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:12.807 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.807 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:12.807 [2024-11-20 14:44:19.765749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:12.807 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.807 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:24:12.807 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:12.807 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:12.807 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:12.807 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.807 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:12.807 Malloc0
00:24:12.807 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.807 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:12.807 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.807 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:12.807 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.808 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:24:12.808 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.808 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:12.808 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.808 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:12.808 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.808 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:12.808 [2024-11-20 14:44:19.845092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:12.808 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.808 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:12.808 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.808 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:12.808 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.808 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:24:12.808 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.808 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:12.808 [
00:24:12.808 {
00:24:12.808 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:24:12.808 "subtype": "Discovery",
00:24:12.808 "listen_addresses": [
00:24:12.808 {
00:24:12.808 "trtype": "TCP",
00:24:12.808 "adrfam": "IPv4",
00:24:12.808 "traddr": "10.0.0.2",
00:24:12.808 "trsvcid": "4420"
00:24:12.808 }
00:24:12.808 ],
00:24:12.808 "allow_any_host": true,
00:24:12.808 "hosts": []
00:24:12.808 },
00:24:12.808 {
00:24:12.808 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:24:12.808 "subtype": "NVMe",
00:24:12.808 "listen_addresses": [
00:24:12.808 {
00:24:12.808 "trtype": "TCP",
00:24:12.808 "adrfam": "IPv4",
00:24:12.808 "traddr": "10.0.0.2",
00:24:12.808 "trsvcid": "4420"
00:24:12.808 }
00:24:12.808 ],
00:24:12.808 "allow_any_host": true,
00:24:12.808 "hosts": [],
00:24:12.808 "serial_number": "SPDK00000000000001",
00:24:12.808 "model_number": "SPDK bdev Controller",
00:24:12.808 "max_namespaces": 32,
00:24:12.808 "min_cntlid": 1,
00:24:12.808 "max_cntlid": 65519,
00:24:12.808 "namespaces": [
00:24:12.808 {
00:24:12.808 "nsid": 1,
00:24:12.808 "bdev_name": "Malloc0",
00:24:13.069 "name": "Malloc0",
00:24:13.069 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:24:13.069 "eui64": "ABCDEF0123456789",
00:24:13.069 "uuid": "abdec078-a2ab-4297-9c5e-7657f4bfb103"
00:24:13.069 }
00:24:13.069 ]
00:24:13.069 }
00:24:13.069 ]
00:24:13.069 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.069 14:44:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:24:13.069 [2024-11-20 14:44:19.881007] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization...
00:24:13.069 [2024-11-20 14:44:19.881037] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4003116 ]
00:24:13.069 [2024-11-20 14:44:19.935443] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout)
00:24:13.069 [2024-11-20 14:44:19.935496] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:24:13.069 [2024-11-20 14:44:19.935502] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:24:13.069 [2024-11-20 14:44:19.935516] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:24:13.069 [2024-11-20 14:44:19.935530] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:24:13.069 [2024-11-20 14:44:19.936255] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout)
00:24:13.069 [2024-11-20 14:44:19.936294] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf2b550 0
00:24:13.069 [2024-11-20 14:44:19.942259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:24:13.069 [2024-11-20 14:44:19.942272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:24:13.069 [2024-11-20 14:44:19.942277] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:24:13.069 [2024-11-20 14:44:19.942281] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:24:13.069 [2024-11-20 14:44:19.942312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.069 [2024-11-20 14:44:19.942318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.069 [2024-11-20 14:44:19.942322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf2b550)
00:24:13.069 [2024-11-20 14:44:19.942335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:24:13.069 [2024-11-20 14:44:19.942353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d100, cid 0, qid 0
00:24:13.069 [2024-11-20 14:44:19.949254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.069 [2024-11-20 14:44:19.949263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.069 [2024-11-20 14:44:19.949267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.069 [2024-11-20 14:44:19.949272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d100) on tqpair=0xf2b550
00:24:13.069 [2024-11-20 14:44:19.949282] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:24:13.069 [2024-11-20 14:44:19.949288] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout)
00:24:13.069 [2024-11-20 14:44:19.949294] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout)
00:24:13.069 [2024-11-20 14:44:19.949307] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.069 [2024-11-20 14:44:19.949311] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.069 [2024-11-20 14:44:19.949315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf2b550)
00:24:13.069 [2024-11-20 14:44:19.949322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.069 [2024-11-20 14:44:19.949336] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d100, cid 0, qid 0
00:24:13.069 [2024-11-20 14:44:19.949555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.069 [2024-11-20 14:44:19.949562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.069 [2024-11-20 14:44:19.949566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.069 [2024-11-20 14:44:19.949569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d100) on tqpair=0xf2b550
00:24:13.069 [2024-11-20 14:44:19.949575] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout)
00:24:13.069 [2024-11-20 14:44:19.949582] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout)
00:24:13.069 [2024-11-20 14:44:19.949589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.069 [2024-11-20 14:44:19.949593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.069 [2024-11-20 14:44:19.949596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf2b550)
00:24:13.069 [2024-11-20 14:44:19.949603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.070 [2024-11-20 14:44:19.949618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d100, cid 0, qid 0
00:24:13.070 [2024-11-20 14:44:19.949805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.070 [2024-11-20 14:44:19.949812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.070 [2024-11-20 14:44:19.949815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.949819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d100) on tqpair=0xf2b550
00:24:13.070 [2024-11-20 14:44:19.949824] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout)
00:24:13.070 [2024-11-20 14:44:19.949832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms)
00:24:13.070 [2024-11-20 14:44:19.949839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.949843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.949846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf2b550)
00:24:13.070 [2024-11-20 14:44:19.949853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.070 [2024-11-20 14:44:19.949863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d100, cid 0, qid 0
00:24:13.070 [2024-11-20 14:44:19.950057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.070 [2024-11-20 14:44:19.950063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.070 [2024-11-20 14:44:19.950066] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.950070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d100) on tqpair=0xf2b550
00:24:13.070 [2024-11-20 14:44:19.950075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:24:13.070 [2024-11-20 14:44:19.950084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.950088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.950092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf2b550)
00:24:13.070 [2024-11-20 14:44:19.950099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.070 [2024-11-20 14:44:19.950109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d100, cid 0, qid 0
00:24:13.070 [2024-11-20 14:44:19.950360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.070 [2024-11-20 14:44:19.950367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.070 [2024-11-20 14:44:19.950371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.950375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d100) on tqpair=0xf2b550
00:24:13.070 [2024-11-20 14:44:19.950379] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0
00:24:13.070 [2024-11-20 14:44:19.950384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms)
00:24:13.070 [2024-11-20 14:44:19.950392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:24:13.070 [2024-11-20 14:44:19.950500] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1
00:24:13.070 [2024-11-20 14:44:19.950504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:24:13.070 [2024-11-20 14:44:19.950513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.950516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.950522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf2b550)
00:24:13.070 [2024-11-20 14:44:19.950529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.070 [2024-11-20 14:44:19.950539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d100, cid 0, qid 0
00:24:13.070 [2024-11-20 14:44:19.950715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.070 [2024-11-20 14:44:19.950722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.070 [2024-11-20 14:44:19.950725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.950729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d100) on tqpair=0xf2b550
00:24:13.070 [2024-11-20 14:44:19.950734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:24:13.070 [2024-11-20 14:44:19.950743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.950747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.950750] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf2b550)
00:24:13.070 [2024-11-20 14:44:19.950757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.070 [2024-11-20 14:44:19.950767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d100, cid 0, qid 0
00:24:13.070 [2024-11-20 14:44:19.951009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.070 [2024-11-20 14:44:19.951015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.070 [2024-11-20 14:44:19.951019] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.951022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d100) on tqpair=0xf2b550
00:24:13.070 [2024-11-20 14:44:19.951027] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:24:13.070 [2024-11-20 14:44:19.951032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms)
00:24:13.070 [2024-11-20 14:44:19.951039] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout)
00:24:13.070 [2024-11-20 14:44:19.951053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms)
00:24:13.070 [2024-11-20 14:44:19.951061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.951065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf2b550)
00:24:13.070 [2024-11-20 14:44:19.951072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.070 [2024-11-20 14:44:19.951082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d100, cid 0, qid 0
00:24:13.070 [2024-11-20 14:44:19.951303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:13.070 [2024-11-20 14:44:19.951310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:13.070 [2024-11-20 14:44:19.951314] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.951318] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf2b550): datao=0, datal=4096, cccid=0
00:24:13.070 [2024-11-20 14:44:19.951323] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf8d100) on tqpair(0xf2b550): expected_datao=0, payload_size=4096
00:24:13.070 [2024-11-20 14:44:19.951327] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.951335] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.951340] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.951463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.070 [2024-11-20 14:44:19.951469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.070 [2024-11-20 14:44:19.951473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.951477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d100) on tqpair=0xf2b550
00:24:13.070 [2024-11-20 14:44:19.951484] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295
00:24:13.070 [2024-11-20 14:44:19.951489] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072
00:24:13.070 [2024-11-20 14:44:19.951493] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001
00:24:13.070 [2024-11-20 14:44:19.951501] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16
00:24:13.070 [2024-11-20 14:44:19.951506] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1
00:24:13.070 [2024-11-20 14:44:19.951511] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms)
00:24:13.070 [2024-11-20 14:44:19.951521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms)
00:24:13.070 [2024-11-20 14:44:19.951528] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.951532] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.951536] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf2b550)
00:24:13.070 [2024-11-20 14:44:19.951543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:24:13.070 [2024-11-20 14:44:19.951554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d100, cid 0, qid 0
00:24:13.070 [2024-11-20 14:44:19.951763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.070 [2024-11-20 14:44:19.951769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.070 [2024-11-20 14:44:19.951773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.951777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d100) on tqpair=0xf2b550
00:24:13.070 [2024-11-20 14:44:19.951784] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.951788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.951791] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf2b550)
00:24:13.070 [2024-11-20 14:44:19.951798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.070 [2024-11-20 14:44:19.951804] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.951808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.951812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xf2b550)
00:24:13.070 [2024-11-20 14:44:19.951817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.070 [2024-11-20 14:44:19.951824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.951827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.070 [2024-11-20 14:44:19.951831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf2b550)
00:24:13.070 [2024-11-20 14:44:19.951837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.070 [2024-11-20 14:44:19.951843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.951846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.951852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf2b550)
00:24:13.071 [2024-11-20 14:44:19.951858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.071 [2024-11-20 14:44:19.951863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:24:13.071 [2024-11-20 14:44:19.951871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:24:13.071 [2024-11-20 14:44:19.951878] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.951881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf2b550)
00:24:13.071 [2024-11-20 14:44:19.951888] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.071 [2024-11-20 14:44:19.951900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d100, cid 0, qid 0
00:24:13.071 [2024-11-20 14:44:19.951905] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d280, cid 1, qid 0
00:24:13.071 [2024-11-20 14:44:19.951910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d400, cid 2, qid 0
00:24:13.071 [2024-11-20 14:44:19.951915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d580, cid 3, qid 0
00:24:13.071 [2024-11-20 14:44:19.951920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d700, cid 4, qid 0
00:24:13.071 [2024-11-20 14:44:19.952135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.071 [2024-11-20 14:44:19.952141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.071 [2024-11-20 14:44:19.952145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.952149] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d700) on tqpair=0xf2b550
00:24:13.071 [2024-11-20 14:44:19.952156] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us
00:24:13.071 [2024-11-20 14:44:19.952161] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout)
00:24:13.071 [2024-11-20 14:44:19.952172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.952176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf2b550)
00:24:13.071 [2024-11-20 14:44:19.952182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.071 [2024-11-20 14:44:19.952192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d700, cid 4, qid 0
00:24:13.071 [2024-11-20 14:44:19.952372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:13.071 [2024-11-20 14:44:19.952378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:13.071 [2024-11-20 14:44:19.952382] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.952386] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf2b550): datao=0, datal=4096, cccid=4
00:24:13.071 [2024-11-20 14:44:19.952390] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf8d700) on tqpair(0xf2b550): expected_datao=0, payload_size=4096
00:24:13.071 [2024-11-20 14:44:19.952395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.952405] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.952409] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.952587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.071 [2024-11-20 14:44:19.952594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.071 [2024-11-20 14:44:19.952597] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.952603] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d700) on tqpair=0xf2b550
00:24:13.071 [2024-11-20 14:44:19.952614] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state
00:24:13.071 [2024-11-20 14:44:19.952636] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.952640] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf2b550)
00:24:13.071 [2024-11-20 14:44:19.952647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.071 [2024-11-20 14:44:19.952654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.952658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.952661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf2b550)
00:24:13.071 [2024-11-20 14:44:19.952667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.071 [2024-11-20 14:44:19.952681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d700, cid 4, qid 0
00:24:13.071 [2024-11-20 14:44:19.952686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d880, cid 5, qid 0
00:24:13.071 [2024-11-20 14:44:19.952924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:13.071 [2024-11-20 14:44:19.952930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:13.071 [2024-11-20 14:44:19.952934] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.952937] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf2b550): datao=0, datal=1024, cccid=4
00:24:13.071 [2024-11-20 14:44:19.952942] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf8d700) on tqpair(0xf2b550): expected_datao=0, payload_size=1024
00:24:13.071 [2024-11-20 14:44:19.952946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.952953] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.952956] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.952962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.071 [2024-11-20 14:44:19.952968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.071 [2024-11-20 14:44:19.952972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.952975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d880) on tqpair=0xf2b550
00:24:13.071 [2024-11-20 14:44:19.997252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.071 [2024-11-20 14:44:19.997264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.071 [2024-11-20 14:44:19.997268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.997272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d700) on tqpair=0xf2b550
00:24:13.071 [2024-11-20 14:44:19.997284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.997288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf2b550)
00:24:13.071 [2024-11-20 14:44:19.997295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.071 [2024-11-20 14:44:19.997312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d700, cid 4, qid 0
00:24:13.071 [2024-11-20 14:44:19.997497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:13.071 [2024-11-20 14:44:19.997504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:13.071 [2024-11-20 14:44:19.997508] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.997512] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf2b550): datao=0, datal=3072, cccid=4
00:24:13.071 [2024-11-20 14:44:19.997516] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf8d700) on tqpair(0xf2b550): expected_datao=0, payload_size=3072
00:24:13.071 [2024-11-20 14:44:19.997524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.997531] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.997535] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.997713] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.071 [2024-11-20 14:44:19.997719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.071 [2024-11-20 14:44:19.997723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.997727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d700) on tqpair=0xf2b550
00:24:13.071 [2024-11-20 14:44:19.997735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.997739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf2b550)
00:24:13.071 [2024-11-20 14:44:19.997745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.071 [2024-11-20 14:44:19.997759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d700, cid 4, qid 0
00:24:13.071 [2024-11-20 14:44:19.998014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:13.071 [2024-11-20 14:44:19.998020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:13.071 [2024-11-20 14:44:19.998024] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.998027] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf2b550): datao=0, datal=8, cccid=4
00:24:13.071 [2024-11-20 14:44:19.998032] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf8d700) on tqpair(0xf2b550): expected_datao=0, payload_size=8
00:24:13.071 [2024-11-20 14:44:19.998036] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.998043] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:19.998046] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:20.038322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.071 [2024-11-20 14:44:20.038333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.071 [2024-11-20 14:44:20.038337] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.071 [2024-11-20 14:44:20.038341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d700) on tqpair=0xf2b550
00:24:13.071 =====================================================
00:24:13.071 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:24:13.071 =====================================================
00:24:13.071 Controller Capabilities/Features
00:24:13.071 ================================
00:24:13.071 Vendor ID: 0000
00:24:13.071 Subsystem Vendor ID: 0000
00:24:13.071 Serial Number: ....................
00:24:13.071 Model Number: ........................................
00:24:13.071 Firmware Version: 25.01
00:24:13.071 Recommended Arb Burst: 0
00:24:13.071 IEEE OUI Identifier: 00 00 00
00:24:13.071 Multi-path I/O
00:24:13.071 May have multiple subsystem ports: No
00:24:13.071 May have multiple controllers: No
00:24:13.071 Associated with SR-IOV VF: No
00:24:13.071 Max Data Transfer Size: 131072
00:24:13.072 Max Number of Namespaces: 0
00:24:13.072 Max Number of I/O Queues: 1024
00:24:13.072 NVMe Specification Version (VS): 1.3
00:24:13.072 NVMe Specification Version (Identify): 1.3
00:24:13.072 Maximum Queue Entries: 128
00:24:13.072 Contiguous Queues Required: Yes
00:24:13.072 Arbitration Mechanisms Supported
00:24:13.072 Weighted Round Robin: Not Supported
00:24:13.072 Vendor Specific: Not Supported
00:24:13.072 Reset Timeout: 15000 ms
00:24:13.072 Doorbell Stride: 4 bytes
00:24:13.072 NVM Subsystem Reset: Not Supported
00:24:13.072 Command Sets Supported
00:24:13.072 NVM Command Set: Supported
00:24:13.072 Boot Partition: Not Supported
00:24:13.072 Memory Page Size Minimum: 4096 bytes
00:24:13.072 Memory Page Size Maximum: 4096 bytes
00:24:13.072 Persistent Memory Region: Not Supported
00:24:13.072 Optional Asynchronous Events Supported
00:24:13.072 Namespace Attribute Notices: Not Supported
00:24:13.072 Firmware Activation Notices: Not Supported
00:24:13.072 ANA Change Notices: Not Supported
00:24:13.072 PLE Aggregate Log Change Notices: Not Supported
00:24:13.072 LBA Status Info Alert Notices: Not Supported
00:24:13.072 EGE Aggregate Log Change Notices: Not Supported
00:24:13.072 Normal NVM Subsystem Shutdown event: Not Supported
00:24:13.072 Zone Descriptor Change Notices: Not Supported
00:24:13.072 Discovery Log Change Notices: Supported
00:24:13.072 Controller Attributes
00:24:13.072 128-bit Host Identifier: Not Supported
00:24:13.072 Non-Operational Permissive Mode: Not Supported
00:24:13.072 NVM Sets: Not Supported
00:24:13.072 Read Recovery Levels: Not Supported
00:24:13.072 Endurance Groups: Not Supported
00:24:13.072
Predictable Latency Mode: Not Supported 00:24:13.072 Traffic Based Keep ALive: Not Supported 00:24:13.072 Namespace Granularity: Not Supported 00:24:13.072 SQ Associations: Not Supported 00:24:13.072 UUID List: Not Supported 00:24:13.072 Multi-Domain Subsystem: Not Supported 00:24:13.072 Fixed Capacity Management: Not Supported 00:24:13.072 Variable Capacity Management: Not Supported 00:24:13.072 Delete Endurance Group: Not Supported 00:24:13.072 Delete NVM Set: Not Supported 00:24:13.072 Extended LBA Formats Supported: Not Supported 00:24:13.072 Flexible Data Placement Supported: Not Supported 00:24:13.072 00:24:13.072 Controller Memory Buffer Support 00:24:13.072 ================================ 00:24:13.072 Supported: No 00:24:13.072 00:24:13.072 Persistent Memory Region Support 00:24:13.072 ================================ 00:24:13.072 Supported: No 00:24:13.072 00:24:13.072 Admin Command Set Attributes 00:24:13.072 ============================ 00:24:13.072 Security Send/Receive: Not Supported 00:24:13.072 Format NVM: Not Supported 00:24:13.072 Firmware Activate/Download: Not Supported 00:24:13.072 Namespace Management: Not Supported 00:24:13.072 Device Self-Test: Not Supported 00:24:13.072 Directives: Not Supported 00:24:13.072 NVMe-MI: Not Supported 00:24:13.072 Virtualization Management: Not Supported 00:24:13.072 Doorbell Buffer Config: Not Supported 00:24:13.072 Get LBA Status Capability: Not Supported 00:24:13.072 Command & Feature Lockdown Capability: Not Supported 00:24:13.072 Abort Command Limit: 1 00:24:13.072 Async Event Request Limit: 4 00:24:13.072 Number of Firmware Slots: N/A 00:24:13.072 Firmware Slot 1 Read-Only: N/A 00:24:13.072 Firmware Activation Without Reset: N/A 00:24:13.072 Multiple Update Detection Support: N/A 00:24:13.072 Firmware Update Granularity: No Information Provided 00:24:13.072 Per-Namespace SMART Log: No 00:24:13.072 Asymmetric Namespace Access Log Page: Not Supported 00:24:13.072 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:24:13.072 Command Effects Log Page: Not Supported 00:24:13.072 Get Log Page Extended Data: Supported 00:24:13.072 Telemetry Log Pages: Not Supported 00:24:13.072 Persistent Event Log Pages: Not Supported 00:24:13.072 Supported Log Pages Log Page: May Support 00:24:13.072 Commands Supported & Effects Log Page: Not Supported 00:24:13.072 Feature Identifiers & Effects Log Page:May Support 00:24:13.072 NVMe-MI Commands & Effects Log Page: May Support 00:24:13.072 Data Area 4 for Telemetry Log: Not Supported 00:24:13.072 Error Log Page Entries Supported: 128 00:24:13.072 Keep Alive: Not Supported 00:24:13.072 00:24:13.072 NVM Command Set Attributes 00:24:13.072 ========================== 00:24:13.072 Submission Queue Entry Size 00:24:13.072 Max: 1 00:24:13.072 Min: 1 00:24:13.072 Completion Queue Entry Size 00:24:13.072 Max: 1 00:24:13.072 Min: 1 00:24:13.072 Number of Namespaces: 0 00:24:13.072 Compare Command: Not Supported 00:24:13.072 Write Uncorrectable Command: Not Supported 00:24:13.072 Dataset Management Command: Not Supported 00:24:13.072 Write Zeroes Command: Not Supported 00:24:13.072 Set Features Save Field: Not Supported 00:24:13.072 Reservations: Not Supported 00:24:13.072 Timestamp: Not Supported 00:24:13.072 Copy: Not Supported 00:24:13.072 Volatile Write Cache: Not Present 00:24:13.072 Atomic Write Unit (Normal): 1 00:24:13.072 Atomic Write Unit (PFail): 1 00:24:13.072 Atomic Compare & Write Unit: 1 00:24:13.072 Fused Compare & Write: Supported 00:24:13.072 Scatter-Gather List 00:24:13.072 SGL Command Set: Supported 00:24:13.072 SGL Keyed: Supported 00:24:13.072 SGL Bit Bucket Descriptor: Not Supported 00:24:13.072 SGL Metadata Pointer: Not Supported 00:24:13.072 Oversized SGL: Not Supported 00:24:13.072 SGL Metadata Address: Not Supported 00:24:13.072 SGL Offset: Supported 00:24:13.072 Transport SGL Data Block: Not Supported 00:24:13.072 Replay Protected Memory Block: Not Supported 00:24:13.072 00:24:13.072 
Firmware Slot Information 00:24:13.072 ========================= 00:24:13.072 Active slot: 0 00:24:13.072 00:24:13.072 00:24:13.072 Error Log 00:24:13.072 ========= 00:24:13.072 00:24:13.072 Active Namespaces 00:24:13.072 ================= 00:24:13.072 Discovery Log Page 00:24:13.072 ================== 00:24:13.072 Generation Counter: 2 00:24:13.072 Number of Records: 2 00:24:13.072 Record Format: 0 00:24:13.072 00:24:13.072 Discovery Log Entry 0 00:24:13.072 ---------------------- 00:24:13.072 Transport Type: 3 (TCP) 00:24:13.072 Address Family: 1 (IPv4) 00:24:13.072 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:13.072 Entry Flags: 00:24:13.072 Duplicate Returned Information: 1 00:24:13.072 Explicit Persistent Connection Support for Discovery: 1 00:24:13.072 Transport Requirements: 00:24:13.072 Secure Channel: Not Required 00:24:13.072 Port ID: 0 (0x0000) 00:24:13.072 Controller ID: 65535 (0xffff) 00:24:13.072 Admin Max SQ Size: 128 00:24:13.072 Transport Service Identifier: 4420 00:24:13.072 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:13.072 Transport Address: 10.0.0.2 00:24:13.072 Discovery Log Entry 1 00:24:13.072 ---------------------- 00:24:13.072 Transport Type: 3 (TCP) 00:24:13.072 Address Family: 1 (IPv4) 00:24:13.072 Subsystem Type: 2 (NVM Subsystem) 00:24:13.072 Entry Flags: 00:24:13.072 Duplicate Returned Information: 0 00:24:13.072 Explicit Persistent Connection Support for Discovery: 0 00:24:13.072 Transport Requirements: 00:24:13.072 Secure Channel: Not Required 00:24:13.072 Port ID: 0 (0x0000) 00:24:13.072 Controller ID: 65535 (0xffff) 00:24:13.072 Admin Max SQ Size: 128 00:24:13.072 Transport Service Identifier: 4420 00:24:13.072 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:13.072 Transport Address: 10.0.0.2 [2024-11-20 14:44:20.038427] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:13.072 [2024-11-20 
14:44:20.038437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d100) on tqpair=0xf2b550 00:24:13.072 [2024-11-20 14:44:20.038444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.072 [2024-11-20 14:44:20.038450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d280) on tqpair=0xf2b550 00:24:13.072 [2024-11-20 14:44:20.038455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.072 [2024-11-20 14:44:20.038460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d400) on tqpair=0xf2b550 00:24:13.072 [2024-11-20 14:44:20.038465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.072 [2024-11-20 14:44:20.038470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d580) on tqpair=0xf2b550 00:24:13.072 [2024-11-20 14:44:20.038474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.072 [2024-11-20 14:44:20.038485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.072 [2024-11-20 14:44:20.038490] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.072 [2024-11-20 14:44:20.038493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf2b550) 00:24:13.073 [2024-11-20 14:44:20.038502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.073 [2024-11-20 14:44:20.038516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d580, cid 3, qid 0 00:24:13.073 [2024-11-20 14:44:20.038570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.073 [2024-11-20 
14:44:20.038577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.073 [2024-11-20 14:44:20.038580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.038584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d580) on tqpair=0xf2b550 00:24:13.073 [2024-11-20 14:44:20.038592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.038596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.038599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf2b550) 00:24:13.073 [2024-11-20 14:44:20.038606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.073 [2024-11-20 14:44:20.038619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d580, cid 3, qid 0 00:24:13.073 [2024-11-20 14:44:20.038730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.073 [2024-11-20 14:44:20.038736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.073 [2024-11-20 14:44:20.038740] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.038743] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d580) on tqpair=0xf2b550 00:24:13.073 [2024-11-20 14:44:20.038748] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:13.073 [2024-11-20 14:44:20.038753] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:13.073 [2024-11-20 14:44:20.038762] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.038766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.073 
[2024-11-20 14:44:20.038770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf2b550) 00:24:13.073 [2024-11-20 14:44:20.038777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.073 [2024-11-20 14:44:20.038787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d580, cid 3, qid 0 00:24:13.073 [2024-11-20 14:44:20.038881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.073 [2024-11-20 14:44:20.038888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.073 [2024-11-20 14:44:20.038891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.038895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d580) on tqpair=0xf2b550 00:24:13.073 [2024-11-20 14:44:20.038905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.038909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.038913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf2b550) 00:24:13.073 [2024-11-20 14:44:20.038920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.073 [2024-11-20 14:44:20.038930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d580, cid 3, qid 0 00:24:13.073 [2024-11-20 14:44:20.039033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.073 [2024-11-20 14:44:20.039040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.073 [2024-11-20 14:44:20.039043] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039047] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d580) on tqpair=0xf2b550 
00:24:13.073 [2024-11-20 14:44:20.039057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf2b550) 00:24:13.073 [2024-11-20 14:44:20.039074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.073 [2024-11-20 14:44:20.039085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d580, cid 3, qid 0 00:24:13.073 [2024-11-20 14:44:20.039145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.073 [2024-11-20 14:44:20.039151] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.073 [2024-11-20 14:44:20.039155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039158] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d580) on tqpair=0xf2b550 00:24:13.073 [2024-11-20 14:44:20.039168] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf2b550) 00:24:13.073 [2024-11-20 14:44:20.039182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.073 [2024-11-20 14:44:20.039193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d580, cid 3, qid 0 00:24:13.073 [2024-11-20 14:44:20.039284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.073 [2024-11-20 14:44:20.039291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.073 
[2024-11-20 14:44:20.039295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d580) on tqpair=0xf2b550 00:24:13.073 [2024-11-20 14:44:20.039308] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf2b550) 00:24:13.073 [2024-11-20 14:44:20.039323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.073 [2024-11-20 14:44:20.039333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d580, cid 3, qid 0 00:24:13.073 [2024-11-20 14:44:20.039586] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.073 [2024-11-20 14:44:20.039592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.073 [2024-11-20 14:44:20.039596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d580) on tqpair=0xf2b550 00:24:13.073 [2024-11-20 14:44:20.039609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf2b550) 00:24:13.073 [2024-11-20 14:44:20.039624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.073 [2024-11-20 14:44:20.039634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d580, cid 3, qid 0 
00:24:13.073 [2024-11-20 14:44:20.039738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.073 [2024-11-20 14:44:20.039744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.073 [2024-11-20 14:44:20.039748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d580) on tqpair=0xf2b550 00:24:13.073 [2024-11-20 14:44:20.039761] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf2b550) 00:24:13.073 [2024-11-20 14:44:20.039778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.073 [2024-11-20 14:44:20.039789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d580, cid 3, qid 0 00:24:13.073 [2024-11-20 14:44:20.039846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.073 [2024-11-20 14:44:20.039853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.073 [2024-11-20 14:44:20.039856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d580) on tqpair=0xf2b550 00:24:13.073 [2024-11-20 14:44:20.039870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039874] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.039878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf2b550) 00:24:13.073 [2024-11-20 14:44:20.039884] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.073 [2024-11-20 14:44:20.039895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d580, cid 3, qid 0 00:24:13.073 [2024-11-20 14:44:20.039989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.073 [2024-11-20 14:44:20.039995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.073 [2024-11-20 14:44:20.039999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.040003] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d580) on tqpair=0xf2b550 00:24:13.073 [2024-11-20 14:44:20.040013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.040017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.040021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf2b550) 00:24:13.073 [2024-11-20 14:44:20.040027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.073 [2024-11-20 14:44:20.040037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d580, cid 3, qid 0 00:24:13.073 [2024-11-20 14:44:20.040142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.073 [2024-11-20 14:44:20.040148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.073 [2024-11-20 14:44:20.040151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.040155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d580) on tqpair=0xf2b550 00:24:13.073 [2024-11-20 14:44:20.040165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.040169] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.073 [2024-11-20 14:44:20.040172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf2b550) 00:24:13.073 [2024-11-20 14:44:20.040179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.073 [2024-11-20 14:44:20.040189] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d580, cid 3, qid 0 00:24:13.073 [2024-11-20 14:44:20.044253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.073 [2024-11-20 14:44:20.044263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.074 [2024-11-20 14:44:20.044267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.074 [2024-11-20 14:44:20.044271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d580) on tqpair=0xf2b550 00:24:13.074 [2024-11-20 14:44:20.044282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.074 [2024-11-20 14:44:20.044286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.074 [2024-11-20 14:44:20.044290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf2b550) 00:24:13.074 [2024-11-20 14:44:20.044296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.074 [2024-11-20 14:44:20.044314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf8d580, cid 3, qid 0 00:24:13.074 [2024-11-20 14:44:20.044494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.074 [2024-11-20 14:44:20.044500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.074 [2024-11-20 14:44:20.044504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.074 [2024-11-20 14:44:20.044508] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf8d580) on tqpair=0xf2b550 00:24:13.074 [2024-11-20 14:44:20.044515] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:24:13.074 00:24:13.074 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:13.074 [2024-11-20 14:44:20.069527] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:24:13.074 [2024-11-20 14:44:20.069559] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4003119 ] 00:24:13.074 [2024-11-20 14:44:20.122347] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:13.074 [2024-11-20 14:44:20.122397] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:13.074 [2024-11-20 14:44:20.122402] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:13.074 [2024-11-20 14:44:20.122414] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:13.074 [2024-11-20 14:44:20.122424] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:13.074 [2024-11-20 14:44:20.123096] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:13.074 [2024-11-20 14:44:20.123125] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x970550 0 00:24:13.337 [2024-11-20 14:44:20.133264] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:13.337 [2024-11-20 14:44:20.133276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:13.337 [2024-11-20 14:44:20.133281] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:13.337 [2024-11-20 14:44:20.133284] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:13.337 [2024-11-20 14:44:20.133311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.337 [2024-11-20 14:44:20.133316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.337 [2024-11-20 14:44:20.133320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x970550) 00:24:13.337 [2024-11-20 14:44:20.133331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:13.337 [2024-11-20 14:44:20.133349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2100, cid 0, qid 0 00:24:13.337 [2024-11-20 14:44:20.141255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.337 [2024-11-20 14:44:20.141264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.337 [2024-11-20 14:44:20.141268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.337 [2024-11-20 14:44:20.141272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2100) on tqpair=0x970550 00:24:13.337 [2024-11-20 14:44:20.141281] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:13.337 [2024-11-20 14:44:20.141291] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:13.337 [2024-11-20 14:44:20.141297] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:13.337 [2024-11-20 14:44:20.141308] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.337 [2024-11-20 14:44:20.141312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.337 [2024-11-20 14:44:20.141316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x970550) 00:24:13.337 [2024-11-20 14:44:20.141324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.337 [2024-11-20 14:44:20.141337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2100, cid 0, qid 0 00:24:13.337 [2024-11-20 14:44:20.141559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.337 [2024-11-20 14:44:20.141565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.337 [2024-11-20 14:44:20.141569] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.337 [2024-11-20 14:44:20.141573] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2100) on tqpair=0x970550 00:24:13.337 [2024-11-20 14:44:20.141578] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:13.337 [2024-11-20 14:44:20.141585] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:13.337 [2024-11-20 14:44:20.141592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.337 [2024-11-20 14:44:20.141596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.337 [2024-11-20 14:44:20.141600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x970550) 00:24:13.337 [2024-11-20 14:44:20.141606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.337 [2024-11-20 14:44:20.141617] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2100, cid 0, qid 0 00:24:13.337 [2024-11-20 14:44:20.141822] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.337 [2024-11-20 14:44:20.141828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.337 [2024-11-20 14:44:20.141832] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.337 [2024-11-20 14:44:20.141836] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2100) on tqpair=0x970550 00:24:13.337 [2024-11-20 14:44:20.141841] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:13.337 [2024-11-20 14:44:20.141849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:13.337 [2024-11-20 14:44:20.141856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.337 [2024-11-20 14:44:20.141860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.337 [2024-11-20 14:44:20.141863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x970550) 00:24:13.337 [2024-11-20 14:44:20.141870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.337 [2024-11-20 14:44:20.141881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2100, cid 0, qid 0 00:24:13.337 [2024-11-20 14:44:20.142059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.337 [2024-11-20 14:44:20.142065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.337 [2024-11-20 14:44:20.142068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.337 [2024-11-20 14:44:20.142072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2100) on tqpair=0x970550 
00:24:13.337 [2024-11-20 14:44:20.142077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:24:13.337 [2024-11-20 14:44:20.142089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.337 [2024-11-20 14:44:20.142093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.337 [2024-11-20 14:44:20.142097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x970550)
00:24:13.337 [2024-11-20 14:44:20.142104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.337 [2024-11-20 14:44:20.142114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2100, cid 0, qid 0
00:24:13.337 [2024-11-20 14:44:20.142283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.337 [2024-11-20 14:44:20.142289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.337 [2024-11-20 14:44:20.142293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.337 [2024-11-20 14:44:20.142297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2100) on tqpair=0x970550
00:24:13.337 [2024-11-20 14:44:20.142301] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:24:13.337 [2024-11-20 14:44:20.142306] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:24:13.337 [2024-11-20 14:44:20.142314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:24:13.337 [2024-11-20 14:44:20.142422] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:24:13.337 [2024-11-20 14:44:20.142427] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:24:13.337 [2024-11-20 14:44:20.142434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.337 [2024-11-20 14:44:20.142438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.337 [2024-11-20 14:44:20.142442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x970550)
00:24:13.337 [2024-11-20 14:44:20.142449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.337 [2024-11-20 14:44:20.142459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2100, cid 0, qid 0
00:24:13.337 [2024-11-20 14:44:20.142674] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.337 [2024-11-20 14:44:20.142681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.337 [2024-11-20 14:44:20.142684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.337 [2024-11-20 14:44:20.142688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2100) on tqpair=0x970550
00:24:13.337 [2024-11-20 14:44:20.142693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:24:13.337 [2024-11-20 14:44:20.142702] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.337 [2024-11-20 14:44:20.142706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.337 [2024-11-20 14:44:20.142710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x970550)
00:24:13.337 [2024-11-20 14:44:20.142716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.337 [2024-11-20 14:44:20.142726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2100, cid 0, qid 0
00:24:13.337 [2024-11-20 14:44:20.142875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.337 [2024-11-20 14:44:20.142881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.337 [2024-11-20 14:44:20.142885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.337 [2024-11-20 14:44:20.142888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2100) on tqpair=0x970550
00:24:13.337 [2024-11-20 14:44:20.142893] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:24:13.337 [2024-11-20 14:44:20.142900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:24:13.337 [2024-11-20 14:44:20.142908] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:24:13.338 [2024-11-20 14:44:20.142918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:24:13.338 [2024-11-20 14:44:20.142927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.142931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x970550)
00:24:13.338 [2024-11-20 14:44:20.142938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.338 [2024-11-20 14:44:20.142948] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2100, cid 0, qid 0
00:24:13.338 [2024-11-20 14:44:20.143161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:13.338 [2024-11-20 14:44:20.143168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:13.338 [2024-11-20 14:44:20.143172] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.143176] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x970550): datao=0, datal=4096, cccid=0
00:24:13.338 [2024-11-20 14:44:20.143180] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d2100) on tqpair(0x970550): expected_datao=0, payload_size=4096
00:24:13.338 [2024-11-20 14:44:20.143185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.143198] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.143202] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.183428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.338 [2024-11-20 14:44:20.183438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.338 [2024-11-20 14:44:20.183441] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.183445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2100) on tqpair=0x970550
00:24:13.338 [2024-11-20 14:44:20.183453] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:24:13.338 [2024-11-20 14:44:20.183457] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:24:13.338 [2024-11-20 14:44:20.183462] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:24:13.338 [2024-11-20 14:44:20.183471] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:24:13.338 [2024-11-20 14:44:20.183476] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:24:13.338 [2024-11-20 14:44:20.183481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:24:13.338 [2024-11-20 14:44:20.183491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:24:13.338 [2024-11-20 14:44:20.183498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.183503] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.183506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x970550)
00:24:13.338 [2024-11-20 14:44:20.183513] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:24:13.338 [2024-11-20 14:44:20.183525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2100, cid 0, qid 0
00:24:13.338 [2024-11-20 14:44:20.183729] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.338 [2024-11-20 14:44:20.183738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.338 [2024-11-20 14:44:20.183741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.183745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2100) on tqpair=0x970550
00:24:13.338 [2024-11-20 14:44:20.183752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.183756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.183759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x970550)
00:24:13.338 [2024-11-20 14:44:20.183765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.338 [2024-11-20 14:44:20.183772] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.183776] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.183779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x970550)
00:24:13.338 [2024-11-20 14:44:20.183785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.338 [2024-11-20 14:44:20.183791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.183795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.183798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x970550)
00:24:13.338 [2024-11-20 14:44:20.183804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.338 [2024-11-20 14:44:20.183810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.183814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.183817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x970550)
00:24:13.338 [2024-11-20 14:44:20.183823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.338 [2024-11-20 14:44:20.183828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:24:13.338 [2024-11-20 14:44:20.183836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:24:13.338 [2024-11-20 14:44:20.183843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.183846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x970550)
00:24:13.338 [2024-11-20 14:44:20.183853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.338 [2024-11-20 14:44:20.183865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2100, cid 0, qid 0
00:24:13.338 [2024-11-20 14:44:20.183870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2280, cid 1, qid 0
00:24:13.338 [2024-11-20 14:44:20.183875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2400, cid 2, qid 0
00:24:13.338 [2024-11-20 14:44:20.183880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2580, cid 3, qid 0
00:24:13.338 [2024-11-20 14:44:20.183885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2700, cid 4, qid 0
00:24:13.338 [2024-11-20 14:44:20.184075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.338 [2024-11-20 14:44:20.184081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.338 [2024-11-20 14:44:20.184084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.184088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2700) on tqpair=0x970550
00:24:13.338 [2024-11-20 14:44:20.184095] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:24:13.338 [2024-11-20 14:44:20.184102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:24:13.338 [2024-11-20 14:44:20.184110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:24:13.338 [2024-11-20 14:44:20.184116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:24:13.338 [2024-11-20 14:44:20.184122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.184126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.184130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x970550)
00:24:13.338 [2024-11-20 14:44:20.184136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:24:13.338 [2024-11-20 14:44:20.184147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2700, cid 4, qid 0
00:24:13.338 [2024-11-20 14:44:20.184351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.338 [2024-11-20 14:44:20.184358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.338 [2024-11-20 14:44:20.184361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.184365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2700) on tqpair=0x970550
00:24:13.338 [2024-11-20 14:44:20.184430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:24:13.338 [2024-11-20 14:44:20.184439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:24:13.338 [2024-11-20 14:44:20.184446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.184450] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x970550)
00:24:13.338 [2024-11-20 14:44:20.184456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.338 [2024-11-20 14:44:20.184467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2700, cid 4, qid 0
00:24:13.338 [2024-11-20 14:44:20.184664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:13.338 [2024-11-20 14:44:20.184671] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:13.338 [2024-11-20 14:44:20.184674] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.184678] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x970550): datao=0, datal=4096, cccid=4
00:24:13.338 [2024-11-20 14:44:20.184682] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d2700) on tqpair(0x970550): expected_datao=0, payload_size=4096
00:24:13.338 [2024-11-20 14:44:20.184687] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.184693] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.184697] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.184905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.338 [2024-11-20 14:44:20.184911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.338 [2024-11-20 14:44:20.184915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.184919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2700) on tqpair=0x970550
00:24:13.338 [2024-11-20 14:44:20.184927] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:24:13.338 [2024-11-20 14:44:20.184936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:24:13.338 [2024-11-20 14:44:20.184944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:24:13.338 [2024-11-20 14:44:20.184953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.338 [2024-11-20 14:44:20.184957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x970550)
00:24:13.339 [2024-11-20 14:44:20.184964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.339 [2024-11-20 14:44:20.184974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2700, cid 4, qid 0
00:24:13.339 [2024-11-20 14:44:20.185164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:13.339 [2024-11-20 14:44:20.185171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:13.339 [2024-11-20 14:44:20.185174] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.185178] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x970550): datao=0, datal=4096, cccid=4
00:24:13.339 [2024-11-20 14:44:20.185182] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d2700) on tqpair(0x970550): expected_datao=0, payload_size=4096
00:24:13.339 [2024-11-20 14:44:20.185187] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.185193] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.185197] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.189255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.339 [2024-11-20 14:44:20.189263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.339 [2024-11-20 14:44:20.189267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.189271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2700) on tqpair=0x970550
00:24:13.339 [2024-11-20 14:44:20.189283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:24:13.339 [2024-11-20 14:44:20.189292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:24:13.339 [2024-11-20 14:44:20.189300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.189303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x970550)
00:24:13.339 [2024-11-20 14:44:20.189310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.339 [2024-11-20 14:44:20.189322] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2700, cid 4, qid 0
00:24:13.339 [2024-11-20 14:44:20.189493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:13.339 [2024-11-20 14:44:20.189500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:13.339 [2024-11-20 14:44:20.189503] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.189507] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x970550): datao=0, datal=4096, cccid=4
00:24:13.339 [2024-11-20 14:44:20.189511] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d2700) on tqpair(0x970550): expected_datao=0, payload_size=4096
00:24:13.339 [2024-11-20 14:44:20.189516] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.189523] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.189526] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.189702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.339 [2024-11-20 14:44:20.189708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.339 [2024-11-20 14:44:20.189712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.189716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2700) on tqpair=0x970550
00:24:13.339 [2024-11-20 14:44:20.189723] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:24:13.339 [2024-11-20 14:44:20.189733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:24:13.339 [2024-11-20 14:44:20.189741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:24:13.339 [2024-11-20 14:44:20.189747] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:24:13.339 [2024-11-20 14:44:20.189752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:24:13.339 [2024-11-20 14:44:20.189758] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:24:13.339 [2024-11-20 14:44:20.189763] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:24:13.339 [2024-11-20 14:44:20.189768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:24:13.339 [2024-11-20 14:44:20.189773] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:24:13.339 [2024-11-20 14:44:20.189786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.189790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x970550)
00:24:13.339 [2024-11-20 14:44:20.189796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.339 [2024-11-20 14:44:20.189803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.189807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.189810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x970550)
00:24:13.339 [2024-11-20 14:44:20.189816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.339 [2024-11-20 14:44:20.189830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2700, cid 4, qid 0
00:24:13.339 [2024-11-20 14:44:20.189835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2880, cid 5, qid 0
00:24:13.339 [2024-11-20 14:44:20.190053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.339 [2024-11-20 14:44:20.190059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.339 [2024-11-20 14:44:20.190063] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.190067] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2700) on tqpair=0x970550
00:24:13.339 [2024-11-20 14:44:20.190073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.339 [2024-11-20 14:44:20.190079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.339 [2024-11-20 14:44:20.190083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.190086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2880) on tqpair=0x970550
00:24:13.339 [2024-11-20 14:44:20.190096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.190099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x970550)
00:24:13.339 [2024-11-20 14:44:20.190106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.339 [2024-11-20 14:44:20.190116] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2880, cid 5, qid 0
00:24:13.339 [2024-11-20 14:44:20.190303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.339 [2024-11-20 14:44:20.190310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.339 [2024-11-20 14:44:20.190313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.190319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2880) on tqpair=0x970550
00:24:13.339 [2024-11-20 14:44:20.190328] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.190332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x970550)
00:24:13.339 [2024-11-20 14:44:20.190339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.339 [2024-11-20 14:44:20.190349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2880, cid 5, qid 0
00:24:13.339 [2024-11-20 14:44:20.190514] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.339 [2024-11-20 14:44:20.190520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.339 [2024-11-20 14:44:20.190524] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.190527] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2880) on tqpair=0x970550
00:24:13.339 [2024-11-20 14:44:20.190537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.190541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x970550)
00:24:13.339 [2024-11-20 14:44:20.190547] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.339 [2024-11-20 14:44:20.190557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2880, cid 5, qid 0
00:24:13.339 [2024-11-20 14:44:20.190807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.339 [2024-11-20 14:44:20.190813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.339 [2024-11-20 14:44:20.190816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.190820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2880) on tqpair=0x970550
00:24:13.339 [2024-11-20 14:44:20.190834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.190838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x970550)
00:24:13.339 [2024-11-20 14:44:20.190845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.339 [2024-11-20 14:44:20.190852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.190856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x970550)
00:24:13.339 [2024-11-20 14:44:20.190862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.339 [2024-11-20 14:44:20.190870] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.190873] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x970550)
00:24:13.339 [2024-11-20 14:44:20.190880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.339 [2024-11-20 14:44:20.190887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:13.339 [2024-11-20 14:44:20.190891] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x970550)
00:24:13.339 [2024-11-20 14:44:20.190897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.339 [2024-11-20 14:44:20.190908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2880, cid 5, qid 0
00:24:13.339 [2024-11-20 14:44:20.190913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2700, cid 4, qid 0
00:24:13.339 [2024-11-20 14:44:20.190918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2a00, cid 6, qid 0
00:24:13.340 [2024-11-20 14:44:20.190923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2b80, cid 7, qid 0
00:24:13.340 [2024-11-20 14:44:20.191142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:13.340 [2024-11-20 14:44:20.191149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:13.340 [2024-11-20 14:44:20.191152] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191156] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x970550): datao=0, datal=8192, cccid=5
00:24:13.340 [2024-11-20 14:44:20.191160] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d2880) on tqpair(0x970550): expected_datao=0, payload_size=8192
00:24:13.340 [2024-11-20 14:44:20.191165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191277] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191282] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:13.340 [2024-11-20 14:44:20.191294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:13.340 [2024-11-20 14:44:20.191297] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191301] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x970550): datao=0, datal=512, cccid=4
00:24:13.340 [2024-11-20 14:44:20.191305] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d2700) on tqpair(0x970550): expected_datao=0, payload_size=512
00:24:13.340 [2024-11-20 14:44:20.191309] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191316] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191320] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:13.340 [2024-11-20 14:44:20.191331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:13.340 [2024-11-20 14:44:20.191334] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191338] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x970550): datao=0, datal=512, cccid=6
00:24:13.340 [2024-11-20 14:44:20.191342] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d2a00) on tqpair(0x970550): expected_datao=0, payload_size=512
00:24:13.340 [2024-11-20 14:44:20.191346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191353] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191356] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:13.340 [2024-11-20 14:44:20.191367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:13.340 [2024-11-20 14:44:20.191371] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191374] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x970550): datao=0, datal=4096, cccid=7
00:24:13.340 [2024-11-20 14:44:20.191379] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d2b80) on tqpair(0x970550): expected_datao=0, payload_size=4096
00:24:13.340 [2024-11-20 14:44:20.191383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191398] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191403] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.340 [2024-11-20 14:44:20.191590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.340 [2024-11-20 14:44:20.191593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2880) on tqpair=0x970550
00:24:13.340 [2024-11-20 14:44:20.191611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.340 [2024-11-20 14:44:20.191617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.340 [2024-11-20 14:44:20.191622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2700) on tqpair=0x970550
00:24:13.340 [2024-11-20 14:44:20.191636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.340 [2024-11-20 14:44:20.191642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.340 [2024-11-20 14:44:20.191645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2a00) on tqpair=0x970550
00:24:13.340 [2024-11-20 14:44:20.191656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:13.340 [2024-11-20 14:44:20.191662] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:13.340 [2024-11-20 14:44:20.191665] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:13.340 [2024-11-20 14:44:20.191669] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2b80) on tqpair=0x970550
00:24:13.340 =====================================================
00:24:13.340 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:13.340 
===================================================== 00:24:13.340 Controller Capabilities/Features 00:24:13.340 ================================ 00:24:13.340 Vendor ID: 8086 00:24:13.340 Subsystem Vendor ID: 8086 00:24:13.340 Serial Number: SPDK00000000000001 00:24:13.340 Model Number: SPDK bdev Controller 00:24:13.340 Firmware Version: 25.01 00:24:13.340 Recommended Arb Burst: 6 00:24:13.340 IEEE OUI Identifier: e4 d2 5c 00:24:13.340 Multi-path I/O 00:24:13.340 May have multiple subsystem ports: Yes 00:24:13.340 May have multiple controllers: Yes 00:24:13.340 Associated with SR-IOV VF: No 00:24:13.340 Max Data Transfer Size: 131072 00:24:13.340 Max Number of Namespaces: 32 00:24:13.340 Max Number of I/O Queues: 127 00:24:13.340 NVMe Specification Version (VS): 1.3 00:24:13.340 NVMe Specification Version (Identify): 1.3 00:24:13.340 Maximum Queue Entries: 128 00:24:13.340 Contiguous Queues Required: Yes 00:24:13.340 Arbitration Mechanisms Supported 00:24:13.340 Weighted Round Robin: Not Supported 00:24:13.340 Vendor Specific: Not Supported 00:24:13.340 Reset Timeout: 15000 ms 00:24:13.340 Doorbell Stride: 4 bytes 00:24:13.340 NVM Subsystem Reset: Not Supported 00:24:13.340 Command Sets Supported 00:24:13.340 NVM Command Set: Supported 00:24:13.340 Boot Partition: Not Supported 00:24:13.340 Memory Page Size Minimum: 4096 bytes 00:24:13.340 Memory Page Size Maximum: 4096 bytes 00:24:13.340 Persistent Memory Region: Not Supported 00:24:13.340 Optional Asynchronous Events Supported 00:24:13.340 Namespace Attribute Notices: Supported 00:24:13.340 Firmware Activation Notices: Not Supported 00:24:13.340 ANA Change Notices: Not Supported 00:24:13.340 PLE Aggregate Log Change Notices: Not Supported 00:24:13.340 LBA Status Info Alert Notices: Not Supported 00:24:13.340 EGE Aggregate Log Change Notices: Not Supported 00:24:13.340 Normal NVM Subsystem Shutdown event: Not Supported 00:24:13.340 Zone Descriptor Change Notices: Not Supported 00:24:13.340 Discovery Log Change 
Notices: Not Supported 00:24:13.340 Controller Attributes 00:24:13.340 128-bit Host Identifier: Supported 00:24:13.340 Non-Operational Permissive Mode: Not Supported 00:24:13.340 NVM Sets: Not Supported 00:24:13.340 Read Recovery Levels: Not Supported 00:24:13.340 Endurance Groups: Not Supported 00:24:13.340 Predictable Latency Mode: Not Supported 00:24:13.340 Traffic Based Keep ALive: Not Supported 00:24:13.340 Namespace Granularity: Not Supported 00:24:13.340 SQ Associations: Not Supported 00:24:13.340 UUID List: Not Supported 00:24:13.340 Multi-Domain Subsystem: Not Supported 00:24:13.340 Fixed Capacity Management: Not Supported 00:24:13.340 Variable Capacity Management: Not Supported 00:24:13.340 Delete Endurance Group: Not Supported 00:24:13.340 Delete NVM Set: Not Supported 00:24:13.340 Extended LBA Formats Supported: Not Supported 00:24:13.340 Flexible Data Placement Supported: Not Supported 00:24:13.340 00:24:13.340 Controller Memory Buffer Support 00:24:13.340 ================================ 00:24:13.340 Supported: No 00:24:13.340 00:24:13.340 Persistent Memory Region Support 00:24:13.340 ================================ 00:24:13.340 Supported: No 00:24:13.340 00:24:13.340 Admin Command Set Attributes 00:24:13.340 ============================ 00:24:13.340 Security Send/Receive: Not Supported 00:24:13.340 Format NVM: Not Supported 00:24:13.340 Firmware Activate/Download: Not Supported 00:24:13.340 Namespace Management: Not Supported 00:24:13.340 Device Self-Test: Not Supported 00:24:13.340 Directives: Not Supported 00:24:13.340 NVMe-MI: Not Supported 00:24:13.340 Virtualization Management: Not Supported 00:24:13.340 Doorbell Buffer Config: Not Supported 00:24:13.340 Get LBA Status Capability: Not Supported 00:24:13.340 Command & Feature Lockdown Capability: Not Supported 00:24:13.340 Abort Command Limit: 4 00:24:13.340 Async Event Request Limit: 4 00:24:13.340 Number of Firmware Slots: N/A 00:24:13.340 Firmware Slot 1 Read-Only: N/A 00:24:13.340 Firmware 
Activation Without Reset: N/A 00:24:13.340 Multiple Update Detection Support: N/A 00:24:13.340 Firmware Update Granularity: No Information Provided 00:24:13.340 Per-Namespace SMART Log: No 00:24:13.340 Asymmetric Namespace Access Log Page: Not Supported 00:24:13.340 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:13.340 Command Effects Log Page: Supported 00:24:13.340 Get Log Page Extended Data: Supported 00:24:13.340 Telemetry Log Pages: Not Supported 00:24:13.340 Persistent Event Log Pages: Not Supported 00:24:13.340 Supported Log Pages Log Page: May Support 00:24:13.340 Commands Supported & Effects Log Page: Not Supported 00:24:13.340 Feature Identifiers & Effects Log Page:May Support 00:24:13.340 NVMe-MI Commands & Effects Log Page: May Support 00:24:13.340 Data Area 4 for Telemetry Log: Not Supported 00:24:13.340 Error Log Page Entries Supported: 128 00:24:13.340 Keep Alive: Supported 00:24:13.341 Keep Alive Granularity: 10000 ms 00:24:13.341 00:24:13.341 NVM Command Set Attributes 00:24:13.341 ========================== 00:24:13.341 Submission Queue Entry Size 00:24:13.341 Max: 64 00:24:13.341 Min: 64 00:24:13.341 Completion Queue Entry Size 00:24:13.341 Max: 16 00:24:13.341 Min: 16 00:24:13.341 Number of Namespaces: 32 00:24:13.341 Compare Command: Supported 00:24:13.341 Write Uncorrectable Command: Not Supported 00:24:13.341 Dataset Management Command: Supported 00:24:13.341 Write Zeroes Command: Supported 00:24:13.341 Set Features Save Field: Not Supported 00:24:13.341 Reservations: Supported 00:24:13.341 Timestamp: Not Supported 00:24:13.341 Copy: Supported 00:24:13.341 Volatile Write Cache: Present 00:24:13.341 Atomic Write Unit (Normal): 1 00:24:13.341 Atomic Write Unit (PFail): 1 00:24:13.341 Atomic Compare & Write Unit: 1 00:24:13.341 Fused Compare & Write: Supported 00:24:13.341 Scatter-Gather List 00:24:13.341 SGL Command Set: Supported 00:24:13.341 SGL Keyed: Supported 00:24:13.341 SGL Bit Bucket Descriptor: Not Supported 00:24:13.341 SGL Metadata 
Pointer: Not Supported 00:24:13.341 Oversized SGL: Not Supported 00:24:13.341 SGL Metadata Address: Not Supported 00:24:13.341 SGL Offset: Supported 00:24:13.341 Transport SGL Data Block: Not Supported 00:24:13.341 Replay Protected Memory Block: Not Supported 00:24:13.341 00:24:13.341 Firmware Slot Information 00:24:13.341 ========================= 00:24:13.341 Active slot: 1 00:24:13.341 Slot 1 Firmware Revision: 25.01 00:24:13.341 00:24:13.341 00:24:13.341 Commands Supported and Effects 00:24:13.341 ============================== 00:24:13.341 Admin Commands 00:24:13.341 -------------- 00:24:13.341 Get Log Page (02h): Supported 00:24:13.341 Identify (06h): Supported 00:24:13.341 Abort (08h): Supported 00:24:13.341 Set Features (09h): Supported 00:24:13.341 Get Features (0Ah): Supported 00:24:13.341 Asynchronous Event Request (0Ch): Supported 00:24:13.341 Keep Alive (18h): Supported 00:24:13.341 I/O Commands 00:24:13.341 ------------ 00:24:13.341 Flush (00h): Supported LBA-Change 00:24:13.341 Write (01h): Supported LBA-Change 00:24:13.341 Read (02h): Supported 00:24:13.341 Compare (05h): Supported 00:24:13.341 Write Zeroes (08h): Supported LBA-Change 00:24:13.341 Dataset Management (09h): Supported LBA-Change 00:24:13.341 Copy (19h): Supported LBA-Change 00:24:13.341 00:24:13.341 Error Log 00:24:13.341 ========= 00:24:13.341 00:24:13.341 Arbitration 00:24:13.341 =========== 00:24:13.341 Arbitration Burst: 1 00:24:13.341 00:24:13.341 Power Management 00:24:13.341 ================ 00:24:13.341 Number of Power States: 1 00:24:13.341 Current Power State: Power State #0 00:24:13.341 Power State #0: 00:24:13.341 Max Power: 0.00 W 00:24:13.341 Non-Operational State: Operational 00:24:13.341 Entry Latency: Not Reported 00:24:13.341 Exit Latency: Not Reported 00:24:13.341 Relative Read Throughput: 0 00:24:13.341 Relative Read Latency: 0 00:24:13.341 Relative Write Throughput: 0 00:24:13.341 Relative Write Latency: 0 00:24:13.341 Idle Power: Not Reported 00:24:13.341 Active 
Power: Not Reported 00:24:13.341 Non-Operational Permissive Mode: Not Supported 00:24:13.341 00:24:13.341 Health Information 00:24:13.341 ================== 00:24:13.341 Critical Warnings: 00:24:13.341 Available Spare Space: OK 00:24:13.341 Temperature: OK 00:24:13.341 Device Reliability: OK 00:24:13.341 Read Only: No 00:24:13.341 Volatile Memory Backup: OK 00:24:13.341 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:13.341 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:13.341 Available Spare: 0% 00:24:13.341 Available Spare Threshold: 0% 00:24:13.341 Life Percentage Used:[2024-11-20 14:44:20.191764] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.341 [2024-11-20 14:44:20.191769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x970550) 00:24:13.341 [2024-11-20 14:44:20.191776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.341 [2024-11-20 14:44:20.191787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2b80, cid 7, qid 0 00:24:13.341 [2024-11-20 14:44:20.192005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.341 [2024-11-20 14:44:20.192011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.341 [2024-11-20 14:44:20.192015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.341 [2024-11-20 14:44:20.192018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2b80) on tqpair=0x970550 00:24:13.341 [2024-11-20 14:44:20.192048] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:13.341 [2024-11-20 14:44:20.192058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2100) on tqpair=0x970550 00:24:13.341 [2024-11-20 14:44:20.192063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.341 [2024-11-20 14:44:20.192069] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2280) on tqpair=0x970550 00:24:13.341 [2024-11-20 14:44:20.192073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.341 [2024-11-20 14:44:20.192078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2400) on tqpair=0x970550 00:24:13.341 [2024-11-20 14:44:20.192083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.341 [2024-11-20 14:44:20.192088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2580) on tqpair=0x970550 00:24:13.341 [2024-11-20 14:44:20.192092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.341 [2024-11-20 14:44:20.192101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.341 [2024-11-20 14:44:20.192105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.341 [2024-11-20 14:44:20.192108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x970550) 00:24:13.341 [2024-11-20 14:44:20.192115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.341 [2024-11-20 14:44:20.192127] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2580, cid 3, qid 0 00:24:13.341 [2024-11-20 14:44:20.192306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.341 [2024-11-20 14:44:20.192313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.341 [2024-11-20 14:44:20.192316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.341 [2024-11-20 14:44:20.192323] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2580) on tqpair=0x970550 00:24:13.341 [2024-11-20 14:44:20.192329] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.341 [2024-11-20 14:44:20.192333] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.341 [2024-11-20 14:44:20.192337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x970550) 00:24:13.341 [2024-11-20 14:44:20.192344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.341 [2024-11-20 14:44:20.192357] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2580, cid 3, qid 0 00:24:13.341 [2024-11-20 14:44:20.192606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.341 [2024-11-20 14:44:20.192612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.341 [2024-11-20 14:44:20.192616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.341 [2024-11-20 14:44:20.192620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2580) on tqpair=0x970550 00:24:13.341 [2024-11-20 14:44:20.192624] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:13.341 [2024-11-20 14:44:20.192629] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:13.341 [2024-11-20 14:44:20.192638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.341 [2024-11-20 14:44:20.192642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.342 [2024-11-20 14:44:20.192646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x970550) 00:24:13.342 [2024-11-20 14:44:20.192653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.342 [2024-11-20 14:44:20.192663] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2580, cid 3, qid 0 00:24:13.342 [2024-11-20 14:44:20.192856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.342 [2024-11-20 14:44:20.192862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.342 [2024-11-20 14:44:20.192865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.342 [2024-11-20 14:44:20.192869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2580) on tqpair=0x970550 00:24:13.342 [2024-11-20 14:44:20.192879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.342 [2024-11-20 14:44:20.192883] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.342 [2024-11-20 14:44:20.192886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x970550) 00:24:13.342 [2024-11-20 14:44:20.192893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.342 [2024-11-20 14:44:20.192903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2580, cid 3, qid 0 00:24:13.342 [2024-11-20 14:44:20.193111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.342 [2024-11-20 14:44:20.193117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.342 [2024-11-20 14:44:20.193120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.342 [2024-11-20 14:44:20.193124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2580) on tqpair=0x970550 00:24:13.342 [2024-11-20 14:44:20.193134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:13.342 [2024-11-20 14:44:20.193138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:13.342 [2024-11-20 14:44:20.193141] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x970550) 00:24:13.342 [2024-11-20 14:44:20.193148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.342 [2024-11-20 14:44:20.193158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d2580, cid 3, qid 0 00:24:13.342 [2024-11-20 14:44:20.197253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:13.342 [2024-11-20 14:44:20.197261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:13.342 [2024-11-20 14:44:20.197269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:13.342 [2024-11-20 14:44:20.197273] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d2580) on tqpair=0x970550 00:24:13.342 [2024-11-20 14:44:20.197281] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:24:13.342 0% 00:24:13.342 Data Units Read: 0 00:24:13.342 Data Units Written: 0 00:24:13.342 Host Read Commands: 0 00:24:13.342 Host Write Commands: 0 00:24:13.342 Controller Busy Time: 0 minutes 00:24:13.342 Power Cycles: 0 00:24:13.342 Power On Hours: 0 hours 00:24:13.342 Unsafe Shutdowns: 0 00:24:13.342 Unrecoverable Media Errors: 0 00:24:13.342 Lifetime Error Log Entries: 0 00:24:13.342 Warning Temperature Time: 0 minutes 00:24:13.342 Critical Temperature Time: 0 minutes 00:24:13.342 00:24:13.342 Number of Queues 00:24:13.342 ================ 00:24:13.342 Number of I/O Submission Queues: 127 00:24:13.342 Number of I/O Completion Queues: 127 00:24:13.342 00:24:13.342 Active Namespaces 00:24:13.342 ================= 00:24:13.342 Namespace ID:1 00:24:13.342 Error Recovery Timeout: Unlimited 00:24:13.342 Command Set Identifier: NVM (00h) 00:24:13.342 Deallocate: Supported 00:24:13.342 Deallocated/Unwritten Error: Not Supported 00:24:13.342 Deallocated Read Value: 
Unknown 00:24:13.342 Deallocate in Write Zeroes: Not Supported 00:24:13.342 Deallocated Guard Field: 0xFFFF 00:24:13.342 Flush: Supported 00:24:13.342 Reservation: Supported 00:24:13.342 Namespace Sharing Capabilities: Multiple Controllers 00:24:13.342 Size (in LBAs): 131072 (0GiB) 00:24:13.342 Capacity (in LBAs): 131072 (0GiB) 00:24:13.342 Utilization (in LBAs): 131072 (0GiB) 00:24:13.342 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:13.342 EUI64: ABCDEF0123456789 00:24:13.342 UUID: abdec078-a2ab-4297-9c5e-7657f4bfb103 00:24:13.342 Thin Provisioning: Not Supported 00:24:13.342 Per-NS Atomic Units: Yes 00:24:13.342 Atomic Boundary Size (Normal): 0 00:24:13.342 Atomic Boundary Size (PFail): 0 00:24:13.342 Atomic Boundary Offset: 0 00:24:13.342 Maximum Single Source Range Length: 65535 00:24:13.342 Maximum Copy Length: 65535 00:24:13.342 Maximum Source Range Count: 1 00:24:13.342 NGUID/EUI64 Never Reused: No 00:24:13.342 Namespace Write Protected: No 00:24:13.342 Number of LBA Formats: 1 00:24:13.342 Current LBA Format: LBA Format #00 00:24:13.342 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:13.342 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:13.342 14:44:20 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:13.342 rmmod nvme_tcp 00:24:13.342 rmmod nvme_fabrics 00:24:13.342 rmmod nvme_keyring 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 4002779 ']' 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 4002779 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 4002779 ']' 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 4002779 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4002779 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4002779' 00:24:13.342 killing 
process with pid 4002779 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 4002779 00:24:13.342 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 4002779 00:24:13.602 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:13.602 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:13.602 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:13.602 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:13.602 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:13.602 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:13.602 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:13.602 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:13.602 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:13.602 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.602 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.602 14:44:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.506 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:15.506 00:24:15.506 real 0m9.158s 00:24:15.506 user 0m6.955s 00:24:15.506 sys 0m4.464s 00:24:15.506 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:15.506 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.506 ************************************ 00:24:15.506 END TEST 
nvmf_identify 00:24:15.506 ************************************ 00:24:15.506 14:44:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:15.506 14:44:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:15.506 14:44:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:15.506 14:44:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.766 ************************************ 00:24:15.766 START TEST nvmf_perf 00:24:15.766 ************************************ 00:24:15.766 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:15.766 * Looking for test storage... 00:24:15.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:15.766 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:15.766 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:15.766 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:15.766 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:15.766 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:15.766 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:15.766 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:15.767 14:44:22 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:15.767 
14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:15.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.767 --rc genhtml_branch_coverage=1 00:24:15.767 --rc genhtml_function_coverage=1 00:24:15.767 --rc genhtml_legend=1 00:24:15.767 --rc geninfo_all_blocks=1 00:24:15.767 --rc geninfo_unexecuted_blocks=1 00:24:15.767 00:24:15.767 ' 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:15.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.767 --rc genhtml_branch_coverage=1 00:24:15.767 --rc genhtml_function_coverage=1 00:24:15.767 --rc genhtml_legend=1 00:24:15.767 --rc geninfo_all_blocks=1 00:24:15.767 --rc geninfo_unexecuted_blocks=1 00:24:15.767 00:24:15.767 ' 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:15.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.767 --rc genhtml_branch_coverage=1 00:24:15.767 --rc genhtml_function_coverage=1 00:24:15.767 --rc genhtml_legend=1 00:24:15.767 --rc geninfo_all_blocks=1 00:24:15.767 --rc geninfo_unexecuted_blocks=1 00:24:15.767 00:24:15.767 ' 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:15.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.767 --rc genhtml_branch_coverage=1 00:24:15.767 --rc genhtml_function_coverage=1 00:24:15.767 --rc genhtml_legend=1 00:24:15.767 --rc geninfo_all_blocks=1 00:24:15.767 --rc geninfo_unexecuted_blocks=1 00:24:15.767 00:24:15.767 ' 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.767 14:44:22 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:15.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.767 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:15.768 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:15.768 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:24:15.768 14:44:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:21.044 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:21.044 
14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:21.044 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:21.044 Found net devices under 0000:31:00.0: cvl_0_0 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:21.044 Found net devices under 0000:31:00.1: cvl_0_1 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.044 14:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:21.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:24:21.305 00:24:21.305 --- 10.0.0.2 ping statistics --- 00:24:21.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.305 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:21.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:24:21.305 00:24:21.305 --- 10.0.0.1 ping statistics --- 00:24:21.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.305 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=4007461 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 4007461 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 4007461 ']' 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:21.305 14:44:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:21.305 [2024-11-20 14:44:28.317231] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:24:21.305 [2024-11-20 14:44:28.317308] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.566 [2024-11-20 14:44:28.408738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:21.566 [2024-11-20 14:44:28.462355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.566 [2024-11-20 14:44:28.462412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.566 [2024-11-20 14:44:28.462420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.566 [2024-11-20 14:44:28.462428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.566 [2024-11-20 14:44:28.462435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:21.566 [2024-11-20 14:44:28.464476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.566 [2024-11-20 14:44:28.464639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.566 [2024-11-20 14:44:28.464799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:21.566 [2024-11-20 14:44:28.464800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.137 14:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:22.137 14:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:24:22.137 14:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:22.137 14:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:22.137 14:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:22.137 14:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.137 14:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:22.137 14:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:22.705 14:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:22.705 14:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:22.964 14:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:22.964 14:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:22.964 14:44:29 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:22.964 14:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:22.964 14:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:22.964 14:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:22.964 14:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:23.223 [2024-11-20 14:44:30.126603] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.223 14:44:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:23.483 14:44:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:23.483 14:44:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:23.483 14:44:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:23.483 14:44:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:23.743 14:44:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:23.743 [2024-11-20 14:44:30.766322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.743 14:44:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:24:24.003 14:44:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:24.003 14:44:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:24.003 14:44:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:24.003 14:44:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:25.382 Initializing NVMe Controllers 00:24:25.382 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:25.382 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:25.382 Initialization complete. Launching workers. 00:24:25.382 ======================================================== 00:24:25.382 Latency(us) 00:24:25.382 Device Information : IOPS MiB/s Average min max 00:24:25.382 PCIE (0000:65:00.0) NSID 1 from core 0: 96573.89 377.24 330.93 49.28 5145.01 00:24:25.382 ======================================================== 00:24:25.382 Total : 96573.89 377.24 330.93 49.28 5145.01 00:24:25.382 00:24:25.382 14:44:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:26.762 Initializing NVMe Controllers 00:24:26.762 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:26.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:26.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:26.762 Initialization complete. Launching workers. 
00:24:26.762 ======================================================== 00:24:26.762 Latency(us) 00:24:26.762 Device Information : IOPS MiB/s Average min max 00:24:26.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 83.00 0.32 12443.96 247.66 45974.82 00:24:26.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 58.00 0.23 17613.31 7964.72 47901.20 00:24:26.762 ======================================================== 00:24:26.762 Total : 141.00 0.55 14570.36 247.66 47901.20 00:24:26.762 00:24:26.762 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:28.209 Initializing NVMe Controllers 00:24:28.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:28.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:28.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:28.209 Initialization complete. Launching workers. 
00:24:28.209 ======================================================== 00:24:28.209 Latency(us) 00:24:28.209 Device Information : IOPS MiB/s Average min max 00:24:28.209 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12097.61 47.26 2645.66 483.49 6458.27 00:24:28.209 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3820.88 14.93 8428.31 5385.89 16870.70 00:24:28.209 ======================================================== 00:24:28.209 Total : 15918.49 62.18 4033.66 483.49 16870.70 00:24:28.209 00:24:28.209 14:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:28.209 14:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:28.209 14:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:30.747 Initializing NVMe Controllers 00:24:30.747 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:30.747 Controller IO queue size 128, less than required. 00:24:30.747 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:30.747 Controller IO queue size 128, less than required. 00:24:30.747 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:30.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:30.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:30.747 Initialization complete. Launching workers. 
00:24:30.747 ======================================================== 00:24:30.747 Latency(us) 00:24:30.747 Device Information : IOPS MiB/s Average min max 00:24:30.747 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1958.97 489.74 65814.08 33187.68 120124.42 00:24:30.747 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 614.99 153.75 219156.07 79093.06 330356.28 00:24:30.747 ======================================================== 00:24:30.747 Total : 2573.96 643.49 102451.73 33187.68 330356.28 00:24:30.747 00:24:30.747 14:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:30.747 No valid NVMe controllers or AIO or URING devices found 00:24:30.747 Initializing NVMe Controllers 00:24:30.747 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:30.747 Controller IO queue size 128, less than required. 00:24:30.748 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:30.748 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:30.748 Controller IO queue size 128, less than required. 00:24:30.748 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:30.748 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:30.748 WARNING: Some requested NVMe devices were skipped 00:24:30.748 14:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:33.286 Initializing NVMe Controllers 00:24:33.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:33.286 Controller IO queue size 128, less than required. 00:24:33.286 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:33.286 Controller IO queue size 128, less than required. 00:24:33.286 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:33.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:33.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:33.286 Initialization complete. Launching workers. 
00:24:33.286 00:24:33.286 ==================== 00:24:33.286 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:33.286 TCP transport: 00:24:33.286 polls: 36565 00:24:33.286 idle_polls: 18874 00:24:33.286 sock_completions: 17691 00:24:33.286 nvme_completions: 9199 00:24:33.286 submitted_requests: 13862 00:24:33.286 queued_requests: 1 00:24:33.286 00:24:33.286 ==================== 00:24:33.286 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:33.286 TCP transport: 00:24:33.286 polls: 42637 00:24:33.286 idle_polls: 28695 00:24:33.286 sock_completions: 13942 00:24:33.286 nvme_completions: 6471 00:24:33.286 submitted_requests: 9768 00:24:33.286 queued_requests: 1 00:24:33.286 ======================================================== 00:24:33.286 Latency(us) 00:24:33.286 Device Information : IOPS MiB/s Average min max 00:24:33.286 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2298.00 574.50 56567.03 38182.89 111330.50 00:24:33.286 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1616.44 404.11 80088.18 37548.13 122139.63 00:24:33.286 ======================================================== 00:24:33.286 Total : 3914.44 978.61 66279.94 37548.13 122139.63 00:24:33.286 00:24:33.286 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:33.286 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:33.545 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:33.545 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:33.545 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:33.545 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:33.545 14:44:40 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:33.545 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:33.546 rmmod nvme_tcp 00:24:33.546 rmmod nvme_fabrics 00:24:33.546 rmmod nvme_keyring 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 4007461 ']' 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 4007461 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 4007461 ']' 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 4007461 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4007461 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4007461' 00:24:33.546 killing process with pid 4007461 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@973 -- # kill 4007461 00:24:33.546 14:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 4007461 00:24:35.453 14:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:35.453 14:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:35.453 14:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:35.453 14:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:35.453 14:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:35.453 14:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:35.453 14:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:35.453 14:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:35.453 14:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:35.453 14:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.453 14:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.453 14:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:37.991 00:24:37.991 real 0m21.950s 00:24:37.991 user 0m56.306s 00:24:37.991 sys 0m6.964s 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:37.991 ************************************ 00:24:37.991 END TEST nvmf_perf 00:24:37.991 ************************************ 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test 
nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.991 ************************************ 00:24:37.991 START TEST nvmf_fio_host 00:24:37.991 ************************************ 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:37.991 * Looking for test storage... 00:24:37.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:37.991 14:44:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:37.991 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:37.992 14:44:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:37.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.992 --rc genhtml_branch_coverage=1 00:24:37.992 --rc genhtml_function_coverage=1 00:24:37.992 --rc genhtml_legend=1 00:24:37.992 --rc geninfo_all_blocks=1 00:24:37.992 --rc geninfo_unexecuted_blocks=1 00:24:37.992 00:24:37.992 ' 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:37.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.992 --rc genhtml_branch_coverage=1 00:24:37.992 --rc genhtml_function_coverage=1 00:24:37.992 --rc genhtml_legend=1 00:24:37.992 --rc geninfo_all_blocks=1 00:24:37.992 --rc geninfo_unexecuted_blocks=1 00:24:37.992 00:24:37.992 ' 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:37.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.992 --rc genhtml_branch_coverage=1 00:24:37.992 --rc genhtml_function_coverage=1 00:24:37.992 --rc genhtml_legend=1 00:24:37.992 --rc geninfo_all_blocks=1 00:24:37.992 --rc geninfo_unexecuted_blocks=1 00:24:37.992 00:24:37.992 ' 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:37.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.992 --rc genhtml_branch_coverage=1 00:24:37.992 --rc genhtml_function_coverage=1 00:24:37.992 --rc genhtml_legend=1 00:24:37.992 --rc geninfo_all_blocks=1 00:24:37.992 --rc geninfo_unexecuted_blocks=1 00:24:37.992 00:24:37.992 ' 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:37.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.992 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.993 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.993 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:37.993 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:37.993 14:44:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:37.993 14:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.0 (0x8086 - 0x159b)' 00:24:43.274 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:43.274 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:43.274 14:44:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.274 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.275 14:44:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:43.275 Found net devices under 0000:31:00.0: cvl_0_0 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:43.275 Found net devices under 0000:31:00.1: cvl_0_1 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.275 14:44:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:43.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:24:43.275 00:24:43.275 --- 10.0.0.2 ping statistics --- 00:24:43.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.275 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:43.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:24:43.275 00:24:43.275 --- 10.0.0.1 ping statistics --- 00:24:43.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.275 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=4014856 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 4014856 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 4014856 ']' 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.275 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.275 [2024-11-20 14:44:50.317144] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:24:43.275 [2024-11-20 14:44:50.317204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.543 [2024-11-20 14:44:50.391573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:43.543 [2024-11-20 14:44:50.425085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.543 [2024-11-20 14:44:50.425114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:43.543 [2024-11-20 14:44:50.425120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.543 [2024-11-20 14:44:50.425125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.543 [2024-11-20 14:44:50.425129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:43.543 [2024-11-20 14:44:50.426690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.543 [2024-11-20 14:44:50.426847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.543 [2024-11-20 14:44:50.426999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.543 [2024-11-20 14:44:50.427000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:43.543 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.543 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:43.543 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:43.803 [2024-11-20 14:44:50.645497] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.803 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:43.803 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:43.803 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.803 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:43.803 Malloc1 00:24:44.065 14:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:44.065 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:44.324 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.324 [2024-11-20 14:44:51.335346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.324 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:44.585 14:44:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:44.585 14:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:44.845 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:44.845 fio-3.35 00:24:44.845 Starting 1 thread 00:24:47.523 00:24:47.523 test: (groupid=0, jobs=1): err= 0: pid=4015388: Wed Nov 20 14:44:54 2024 00:24:47.523 read: IOPS=13.9k, BW=54.2MiB/s (56.8MB/s)(109MiB/2005msec) 00:24:47.523 slat (nsec): min=1396, max=101343, avg=1787.62, stdev=924.06 00:24:47.523 clat (usec): min=1681, max=9117, avg=5098.51, stdev=347.79 00:24:47.523 lat (usec): min=1697, max=9118, avg=5100.30, stdev=347.73 00:24:47.523 clat percentiles (usec): 00:24:47.523 | 1.00th=[ 4359], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:24:47.523 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:24:47.523 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5538], 95.00th=[ 5669], 00:24:47.523 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 6980], 99.95th=[ 8291], 00:24:47.523 | 99.99th=[ 8848] 00:24:47.523 bw ( KiB/s): min=54656, max=55872, per=99.99%, avg=55504.00, stdev=569.48, samples=4 00:24:47.523 iops : min=13664, max=13968, avg=13876.00, stdev=142.37, samples=4 00:24:47.523 write: IOPS=13.9k, BW=54.2MiB/s (56.9MB/s)(109MiB/2005msec); 0 zone resets 00:24:47.523 slat (nsec): min=1423, max=98885, avg=1847.11, stdev=724.31 00:24:47.523 clat (usec): min=996, max=8378, avg=4090.67, stdev=299.61 00:24:47.523 lat (usec): min=1003, max=8379, avg=4092.52, stdev=299.58 00:24:47.523 clat percentiles (usec): 00:24:47.523 | 1.00th=[ 3425], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3884], 00:24:47.523 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:24:47.523 | 
70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4490], 00:24:47.523 | 99.00th=[ 4686], 99.50th=[ 4817], 99.90th=[ 5932], 99.95th=[ 7898], 00:24:47.523 | 99.99th=[ 8356] 00:24:47.523 bw ( KiB/s): min=55088, max=55768, per=100.00%, avg=55556.00, stdev=318.30, samples=4 00:24:47.523 iops : min=13772, max=13942, avg=13889.00, stdev=79.57, samples=4 00:24:47.523 lat (usec) : 1000=0.01% 00:24:47.523 lat (msec) : 2=0.04%, 4=18.31%, 10=81.65% 00:24:47.523 cpu : usr=73.35%, sys=25.65%, ctx=35, majf=0, minf=17 00:24:47.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:47.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:47.523 issued rwts: total=27824,27833,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:47.523 00:24:47.523 Run status group 0 (all jobs): 00:24:47.523 READ: bw=54.2MiB/s (56.8MB/s), 54.2MiB/s-54.2MiB/s (56.8MB/s-56.8MB/s), io=109MiB (114MB), run=2005-2005msec 00:24:47.523 WRITE: bw=54.2MiB/s (56.9MB/s), 54.2MiB/s-54.2MiB/s (56.9MB/s-56.9MB/s), io=109MiB (114MB), run=2005-2005msec 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:47.523 14:44:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:47.523 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:47.523 fio-3.35 00:24:47.523 Starting 1 thread 00:24:50.056 00:24:50.056 test: (groupid=0, jobs=1): err= 0: pid=4016205: Wed Nov 20 14:44:56 2024 00:24:50.056 read: IOPS=10.6k, BW=165MiB/s (173MB/s)(338MiB/2046msec) 00:24:50.056 slat (usec): min=2, max=110, avg= 3.04, stdev= 1.34 00:24:50.056 clat (usec): min=1322, max=50624, avg=7339.90, stdev=3045.00 00:24:50.056 lat (usec): min=1325, max=50627, avg=7342.95, stdev=3045.18 00:24:50.056 clat percentiles (usec): 00:24:50.056 | 1.00th=[ 3458], 5.00th=[ 4178], 10.00th=[ 4686], 20.00th=[ 5342], 00:24:50.056 | 30.00th=[ 5932], 40.00th=[ 6456], 50.00th=[ 7046], 60.00th=[ 7701], 00:24:50.056 | 70.00th=[ 8291], 80.00th=[ 9110], 90.00th=[10159], 95.00th=[10814], 00:24:50.056 | 99.00th=[12518], 99.50th=[13304], 99.90th=[49021], 99.95th=[49546], 00:24:50.056 | 99.99th=[50594] 00:24:50.056 bw ( KiB/s): min=64928, max=102752, per=51.78%, avg=87648.00, stdev=16055.46, samples=4 00:24:50.056 iops : min= 4058, max= 6422, avg=5478.00, stdev=1003.47, samples=4 00:24:50.056 write: IOPS=6207, BW=97.0MiB/s (102MB/s)(179MiB/1845msec); 0 zone resets 00:24:50.056 slat (usec): min=27, max=151, avg=34.32, stdev= 6.62 00:24:50.056 clat (usec): min=2505, max=51724, avg=8218.19, stdev=3377.69 00:24:50.056 lat (usec): min=2533, max=51752, avg=8252.51, stdev=3378.93 00:24:50.056 clat percentiles (usec): 00:24:50.057 | 1.00th=[ 
4948], 5.00th=[ 5669], 10.00th=[ 6128], 20.00th=[ 6718], 00:24:50.057 | 30.00th=[ 7111], 40.00th=[ 7504], 50.00th=[ 7898], 60.00th=[ 8291], 00:24:50.057 | 70.00th=[ 8848], 80.00th=[ 9372], 90.00th=[10028], 95.00th=[10552], 00:24:50.057 | 99.00th=[12256], 99.50th=[47973], 99.90th=[51119], 99.95th=[51119], 00:24:50.057 | 99.99th=[51643] 00:24:50.057 bw ( KiB/s): min=68704, max=106624, per=91.77%, avg=91144.00, stdev=16002.33, samples=4 00:24:50.057 iops : min= 4294, max= 6664, avg=5696.50, stdev=1000.15, samples=4 00:24:50.057 lat (msec) : 2=0.05%, 4=2.43%, 10=86.54%, 20=10.60%, 50=0.28% 00:24:50.057 lat (msec) : 100=0.10% 00:24:50.057 cpu : usr=84.55%, sys=13.59%, ctx=22, majf=0, minf=35 00:24:50.057 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:50.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:50.057 issued rwts: total=21644,11453,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.057 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:50.057 00:24:50.057 Run status group 0 (all jobs): 00:24:50.057 READ: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=338MiB (355MB), run=2046-2046msec 00:24:50.057 WRITE: bw=97.0MiB/s (102MB/s), 97.0MiB/s-97.0MiB/s (102MB/s-102MB/s), io=179MiB (188MB), run=1845-1845msec 00:24:50.057 14:44:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 
00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:50.316 rmmod nvme_tcp 00:24:50.316 rmmod nvme_fabrics 00:24:50.316 rmmod nvme_keyring 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 4014856 ']' 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 4014856 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 4014856 ']' 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 4014856 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4014856 00:24:50.316 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:50.317 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:50.317 14:44:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4014856' 00:24:50.317 killing process with pid 4014856 00:24:50.317 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 4014856 00:24:50.317 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 4014856 00:24:50.576 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:50.576 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:50.576 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:50.576 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:50.576 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:50.576 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:50.576 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:50.576 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:50.576 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:50.576 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.576 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.576 14:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.481 14:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:52.481 00:24:52.481 real 0m14.856s 00:24:52.481 user 0m57.607s 00:24:52.481 sys 0m6.038s 00:24:52.481 14:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:52.481 14:44:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.481 ************************************ 00:24:52.481 END TEST nvmf_fio_host 00:24:52.481 ************************************ 00:24:52.481 14:44:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:52.481 14:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:52.481 14:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:52.481 14:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.481 ************************************ 00:24:52.481 START TEST nvmf_failover 00:24:52.481 ************************************ 00:24:52.481 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:52.481 * Looking for test storage... 
00:24:52.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:52.481 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:52.481 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:24:52.481 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:52.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.742 --rc genhtml_branch_coverage=1 00:24:52.742 --rc genhtml_function_coverage=1 00:24:52.742 --rc genhtml_legend=1 00:24:52.742 --rc geninfo_all_blocks=1 00:24:52.742 --rc geninfo_unexecuted_blocks=1 00:24:52.742 00:24:52.742 ' 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:24:52.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.742 --rc genhtml_branch_coverage=1 00:24:52.742 --rc genhtml_function_coverage=1 00:24:52.742 --rc genhtml_legend=1 00:24:52.742 --rc geninfo_all_blocks=1 00:24:52.742 --rc geninfo_unexecuted_blocks=1 00:24:52.742 00:24:52.742 ' 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:52.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.742 --rc genhtml_branch_coverage=1 00:24:52.742 --rc genhtml_function_coverage=1 00:24:52.742 --rc genhtml_legend=1 00:24:52.742 --rc geninfo_all_blocks=1 00:24:52.742 --rc geninfo_unexecuted_blocks=1 00:24:52.742 00:24:52.742 ' 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:52.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.742 --rc genhtml_branch_coverage=1 00:24:52.742 --rc genhtml_function_coverage=1 00:24:52.742 --rc genhtml_legend=1 00:24:52.742 --rc geninfo_all_blocks=1 00:24:52.742 --rc geninfo_unexecuted_blocks=1 00:24:52.742 00:24:52.742 ' 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.742 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:52.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:52.743 14:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.017 14:45:04 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:58.017 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.017 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.018 14:45:04 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:58.018 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.018 14:45:04 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:58.018 Found net devices under 0000:31:00.0: cvl_0_0 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:58.018 Found net devices under 0000:31:00.1: cvl_0_1 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:58.018 14:45:04 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:58.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:24:58.018 00:24:58.018 --- 10.0.0.2 ping statistics --- 00:24:58.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.018 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:58.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:24:58.018 00:24:58.018 --- 10.0.0.1 ping statistics --- 00:24:58.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.018 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=4021305 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 4021305 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 4021305 ']' 00:24:58.018 14:45:04 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.018 14:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:58.018 [2024-11-20 14:45:04.984843] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:24:58.018 [2024-11-20 14:45:04.984893] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.018 [2024-11-20 14:45:05.055850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:58.277 [2024-11-20 14:45:05.085042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.277 [2024-11-20 14:45:05.085068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.277 [2024-11-20 14:45:05.085078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.277 [2024-11-20 14:45:05.085082] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:58.277 [2024-11-20 14:45:05.085087] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.277 [2024-11-20 14:45:05.086166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.277 [2024-11-20 14:45:05.086300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.277 [2024-11-20 14:45:05.086303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.277 14:45:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.277 14:45:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:58.277 14:45:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:58.277 14:45:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:58.277 14:45:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:58.277 14:45:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.277 14:45:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:58.277 [2024-11-20 14:45:05.326393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.536 14:45:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:58.536 Malloc0 00:24:58.536 14:45:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:58.794 14:45:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:58.794 14:45:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:59.054 [2024-11-20 14:45:05.969797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.054 14:45:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:59.313 [2024-11-20 14:45:06.130220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:59.313 14:45:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:59.313 [2024-11-20 14:45:06.290652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:59.313 14:45:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4021621 00:24:59.313 14:45:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:59.313 14:45:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4021621 /var/tmp/bdevperf.sock 00:24:59.313 14:45:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 4021621 ']' 00:24:59.313 14:45:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:59.313 14:45:06 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:24:59.313 14:45:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:59.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:59.313 14:45:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:59.313 14:45:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:59.313 14:45:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:00.253 14:45:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.253 14:45:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:00.253 14:45:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:00.513 NVMe0n1 00:25:00.513 14:45:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:00.773 00:25:00.773 14:45:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4022228 00:25:00.773 14:45:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:00.773 14:45:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:25:01.713 14:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:01.972 [2024-11-20 14:45:08.879392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7b370 is same with the state(6) to be set 00:25:01.972 [2024-11-20 14:45:08.879429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7b370 is same with the state(6) to be set 00:25:01.972 [2024-11-20 14:45:08.879436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7b370 is same with the state(6) to be set 00:25:01.972 [2024-11-20 14:45:08.879441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7b370 is same with the state(6) to be set 00:25:01.972 [2024-11-20 14:45:08.879446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7b370 is same with the state(6) to be set 00:25:01.972 [2024-11-20 14:45:08.879451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7b370 is same with the state(6) to be set 00:25:01.972 [2024-11-20 14:45:08.879455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7b370 is same with the state(6) to be set 00:25:01.972 [2024-11-20 14:45:08.879460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7b370 is same with the state(6) to be set 00:25:01.972 [2024-11-20 14:45:08.879465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7b370 is same with the state(6) to be set 00:25:01.972 [2024-11-20 14:45:08.879469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7b370 is same with the state(6) to be set 00:25:01.972 [2024-11-20 14:45:08.879474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1e7b370 is same with the state(6) to be set
00:25:01.972 [2024-11-20 14:45:08.879478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7b370 is same with the state(6) to be set
[... identical *ERROR* line repeated, 14:45:08.879483 through 14:45:08.879520 ...]
00:25:01.972 14:45:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:05.270 14:45:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:05.270
00:25:05.270 14:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:05.270 [2024-11-20 14:45:12.285863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7c1e0 is same with the state(6) to be set
[... identical *ERROR* line repeated dozens of times, 14:45:12.285895 through 14:45:12.286126 ...]
00:25:05.270 14:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:08.561 14:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:08.561 [2024-11-20 14:45:15.455191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:08.561 14:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:09.498 14:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:09.757 14:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 4022228
00:25:16.339 {
00:25:16.339   "results": [
00:25:16.339     {
00:25:16.339       "job": "NVMe0n1",
00:25:16.339       "core_mask": "0x1",
00:25:16.339       "workload": "verify",
00:25:16.339       "status": "finished",
00:25:16.339       "verify_range": {
00:25:16.339         "start": 0,
00:25:16.339         "length": 16384
00:25:16.339       },
00:25:16.339       "queue_depth": 128,
00:25:16.339       "io_size": 4096,
00:25:16.339       "runtime": 15.005766,
00:25:16.339       "iops": 12572.833669404148,
00:25:16.339       "mibps": 49.112631521109954,
00:25:16.339       "io_failed": 13373,
00:25:16.339       "io_timeout": 0,
00:25:16.339       "avg_latency_us": 9487.001016970406,
00:25:16.339       "min_latency_us": 539.3066666666666,
00:25:16.339       "max_latency_us": 12561.066666666668
00:25:16.339     }
00:25:16.339   ],
00:25:16.339   "core_count": 1
00:25:16.339 }
00:25:16.339 14:45:22 nvmf_tcp.nvmf_host.nvmf_failover --
host/failover.sh@61 -- # killprocess 4021621
00:25:16.339 14:45:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4021621 ']'
00:25:16.339 14:45:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4021621
00:25:16.339 14:45:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:25:16.339 14:45:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:16.339 14:45:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4021621
00:25:16.339 14:45:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:16.339 14:45:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:16.339 14:45:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4021621'
00:25:16.339 killing process with pid 4021621
00:25:16.339 14:45:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4021621
00:25:16.339 14:45:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4021621
00:25:16.339 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:16.339 [2024-11-20 14:45:06.342026] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization...
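As a sanity check on the bdevperf result block above: the reported "mibps" is derived directly from "iops" and "io_size" (bytes per I/O), i.e. mibps = iops × io_size / 2^20. A minimal sketch, using values copied from the log; the `result` dict here is only an illustration of the JSON fields, not a bdevperf API:

```python
import json

# Fields copied verbatim from the bdevperf result block in the log above.
result = json.loads("""
{
  "iops": 12572.833669404148,
  "io_size": 4096,
  "runtime": 15.005766,
  "mibps": 49.112631521109954
}
""")

# Throughput in MiB/s: I/Os per second times bytes per I/O, divided by 2^20.
mibps = result["iops"] * result["io_size"] / (1024 * 1024)

# Agrees with the logged "mibps" field to floating-point precision.
print(round(mibps, 6))
```

The same relation holds for any bdevperf run, so it is a quick way to cross-check a result block whose fields look suspicious.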
00:25:16.339 [2024-11-20 14:45:06.342084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4021621 ]
00:25:16.339 [2024-11-20 14:45:06.420242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:16.339 [2024-11-20 14:45:06.456097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:16.339 Running I/O for 15 seconds...
00:25:16.339 11204.00 IOPS, 43.77 MiB/s [2024-11-20T13:45:23.399Z]
00:25:16.339 [2024-11-20 14:45:08.880725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.339 [2024-11-20 14:45:08.880759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching nvme_io_qpair_print_command / spdk_nvme_print_completion pairs repeated for each outstanding I/O (READ lba 96352-96728, WRITE lba 97000-97136), every command completed ABORTED - SQ DELETION (00/08), 14:45:08.880775 through 14:45:08.881928 ...]
00:25:16.341 [2024-11-20 14:45:08.881937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.341 [2024-11-20 14:45:08.881945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.881955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.341 [2024-11-20 14:45:08.881963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.881973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.341 [2024-11-20 14:45:08.881980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.881990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.341 [2024-11-20 14:45:08.881998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.882008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.341 [2024-11-20 14:45:08.882015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.882025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.341 [2024-11-20 14:45:08.882032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.882042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.341 [2024-11-20 14:45:08.882049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.882059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.341 [2024-11-20 14:45:08.882067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.882078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.341 [2024-11-20 14:45:08.882086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.882096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.341 [2024-11-20 14:45:08.882103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.882113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.341 [2024-11-20 14:45:08.882120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.882130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.341 [2024-11-20 14:45:08.882138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.882147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.341 [2024-11-20 14:45:08.882155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.882164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.341 [2024-11-20 14:45:08.882172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.882182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.341 [2024-11-20 14:45:08.882189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.882199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.341 [2024-11-20 14:45:08.882206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.882216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.341 [2024-11-20 14:45:08.882224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.882234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:16.341 [2024-11-20 14:45:08.882242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.341 [2024-11-20 14:45:08.882256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.341 [2024-11-20 14:45:08.882264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.342 [2024-11-20 14:45:08.882281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.342 [2024-11-20 14:45:08.882300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.342 [2024-11-20 14:45:08.882318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.342 [2024-11-20 14:45:08.882335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.342 [2024-11-20 14:45:08.882352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.342 [2024-11-20 14:45:08.882369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.342 [2024-11-20 14:45:08.882528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.342 
[2024-11-20 14:45:08.882546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.342 [2024-11-20 14:45:08.882564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.342 [2024-11-20 14:45:08.882581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.342 [2024-11-20 14:45:08.882599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.342 [2024-11-20 14:45:08.882617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.342 [2024-11-20 14:45:08.882634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.342 [2024-11-20 14:45:08.882651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 
[2024-11-20 14:45:08.882844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882941] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.342 [2024-11-20 14:45:08.882948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.342 [2024-11-20 14:45:08.882958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.343 [2024-11-20 14:45:08.882968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.343 [2024-11-20 14:45:08.882977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.343 [2024-11-20 14:45:08.882985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.343 [2024-11-20 14:45:08.882995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.343 [2024-11-20 14:45:08.883003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.343 [2024-11-20 14:45:08.883022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.343 [2024-11-20 14:45:08.883030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.343 [2024-11-20 14:45:08.883036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97368 len:8 PRP1 0x0 PRP2 0x0 00:25:16.343 [2024-11-20 14:45:08.883045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.343 [2024-11-20 14:45:08.883086] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:16.343 [2024-11-20 14:45:08.883108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.343 [2024-11-20 14:45:08.883117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.343 [2024-11-20 14:45:08.883125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.343 [2024-11-20 14:45:08.883133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.343 [2024-11-20 14:45:08.883141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.343 [2024-11-20 14:45:08.883149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.343 [2024-11-20 14:45:08.883158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.343 [2024-11-20 14:45:08.883165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.343 [2024-11-20 14:45:08.883174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:25:16.343 [2024-11-20 14:45:08.886778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:16.343 [2024-11-20 14:45:08.886803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1396d80 (9): Bad file descriptor
00:25:16.343 [2024-11-20 14:45:08.924312] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:25:16.343 11591.00 IOPS, 45.28 MiB/s [2024-11-20T13:45:23.403Z] 12042.67 IOPS, 47.04 MiB/s [2024-11-20T13:45:23.403Z] 12249.00 IOPS, 47.85 MiB/s [2024-11-20T13:45:23.403Z]
00:25:16.343 [2024-11-20 14:45:12.286342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.343 [2024-11-20 14:45:12.286371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further WRITE command/completion pairs omitted: lba 59432-59672 on sqid:1 (SGL DATA BLOCK OFFSET, len:0x1000), each completed with ABORTED - SQ DELETION (00/08) ...]
00:25:16.344 [2024-11-20 14:45:12.286761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 
lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 
14:45:12.286833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286899] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.286989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.286994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.287000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.287007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.287014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.287019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.287025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.287030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.287037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.287042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.287049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.287053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.287061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.287066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.287072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.287078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.287084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.344 [2024-11-20 14:45:12.287089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.344 [2024-11-20 14:45:12.287096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 
[2024-11-20 14:45:12.287175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287241] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287382] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 
nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 
[2024-11-20 14:45:12.287519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.345 [2024-11-20 14:45:12.287563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.345 [2024-11-20 14:45:12.287569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 
14:45:12.287724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287795] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:12.287900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.346 [2024-11-20 14:45:12.287921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.346 [2024-11-20 14:45:12.287925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60440 len:8 PRP1 0x0 PRP2 0x0 00:25:16.346 [2024-11-20 14:45:12.287934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287966] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:25:16.346 [2024-11-20 14:45:12.287983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.346 [2024-11-20 14:45:12.287989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.287995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.346 [2024-11-20 14:45:12.288000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.288006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.346 [2024-11-20 14:45:12.288011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.288017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.346 [2024-11-20 14:45:12.288023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:12.288028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:16.346 [2024-11-20 14:45:12.288047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1396d80 (9): Bad file descriptor 00:25:16.346 [2024-11-20 14:45:12.290479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:16.346 [2024-11-20 14:45:12.354385] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:25:16.346 12209.80 IOPS, 47.69 MiB/s [2024-11-20T13:45:23.406Z] 12330.83 IOPS, 48.17 MiB/s [2024-11-20T13:45:23.406Z] 12406.57 IOPS, 48.46 MiB/s [2024-11-20T13:45:23.406Z] 12477.00 IOPS, 48.74 MiB/s [2024-11-20T13:45:23.406Z] [2024-11-20 14:45:16.617466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:16.617502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:16.617515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:16.617521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:16.617528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.346 [2024-11-20 14:45:16.617533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.346 [2024-11-20 14:45:16.617540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 
14:45:16.617563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.347 [2024-11-20 14:45:16.617632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.347 [2024-11-20 14:45:16.617644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.347 [2024-11-20 14:45:16.617656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.347 [2024-11-20 14:45:16.617668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.347 [2024-11-20 14:45:16.617680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.347 [2024-11-20 14:45:16.617692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3616 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:16.347 [2024-11-20 14:45:16.617704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 
14:45:16.617926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.617991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.617996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.618003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.618008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.618014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.618019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.347 [2024-11-20 14:45:16.618026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.347 [2024-11-20 14:45:16.618032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:16.348 [2024-11-20 14:45:16.618063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4456 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618271] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.348 [2024-11-20 14:45:16.618390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.348 [2024-11-20 14:45:16.618402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.348 
[2024-11-20 14:45:16.618414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.348 [2024-11-20 14:45:16.618427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.348 [2024-11-20 14:45:16.618439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.348 [2024-11-20 14:45:16.618445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.348 [2024-11-20 14:45:16.618451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618482] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.349 [2024-11-20 14:45:16.618587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3808 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 
14:45:16.618768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618837] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.349 [2024-11-20 14:45:16.618928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.349 [2024-11-20 14:45:16.618933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.350 [2024-11-20 14:45:16.618940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.350 [2024-11-20 14:45:16.618946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.350 [2024-11-20 14:45:16.618952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.350 [2024-11-20 14:45:16.618957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.350 [2024-11-20 14:45:16.618964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.350 [2024-11-20 14:45:16.618970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.350 [2024-11-20 14:45:16.618977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.350 [2024-11-20 14:45:16.618988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.350 [2024-11-20 14:45:16.618995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.350 [2024-11-20 14:45:16.619001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.350 [2024-11-20 14:45:16.619008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.350 [2024-11-20 14:45:16.619013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.350 [2024-11-20 14:45:16.619020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.350 [2024-11-20 14:45:16.619025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.350 [2024-11-20 14:45:16.619031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.350 [2024-11-20 14:45:16.619037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.350 [2024-11-20 14:45:16.619044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.350 [2024-11-20 14:45:16.619049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.350 [2024-11-20 14:45:16.619056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.350 [2024-11-20 14:45:16.619061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.350 [2024-11-20 14:45:16.619068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.350 [2024-11-20 14:45:16.619073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.350 [2024-11-20 14:45:16.619089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.350 [2024-11-20 14:45:16.619095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.350 [2024-11-20 14:45:16.619100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4056 len:8 PRP1 0x0 PRP2 0x0 00:25:16.350 [2024-11-20 14:45:16.619105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.350 [2024-11-20 14:45:16.619140] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:16.350 [2024-11-20 14:45:16.619157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.350 [2024-11-20 14:45:16.619163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.350 [2024-11-20 14:45:16.619169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 
nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.350 [2024-11-20 14:45:16.619175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.350 [2024-11-20 14:45:16.619181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.350 [2024-11-20 14:45:16.619186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.350 [2024-11-20 14:45:16.619194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.350 [2024-11-20 14:45:16.619200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.350 [2024-11-20 14:45:16.619206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:16.350 [2024-11-20 14:45:16.621681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:16.350 [2024-11-20 14:45:16.621703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1396d80 (9): Bad file descriptor 00:25:16.350 [2024-11-20 14:45:16.764696] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:25:16.350 12300.11 IOPS, 48.05 MiB/s [2024-11-20T13:45:23.410Z] 12373.60 IOPS, 48.33 MiB/s [2024-11-20T13:45:23.410Z] 12430.36 IOPS, 48.56 MiB/s [2024-11-20T13:45:23.410Z] 12467.17 IOPS, 48.70 MiB/s [2024-11-20T13:45:23.410Z] 12502.15 IOPS, 48.84 MiB/s [2024-11-20T13:45:23.410Z] 12546.21 IOPS, 49.01 MiB/s 00:25:16.350 Latency(us) 00:25:16.350 [2024-11-20T13:45:23.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.350 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:16.350 Verification LBA range: start 0x0 length 0x4000 00:25:16.350 NVMe0n1 : 15.01 12572.83 49.11 891.19 0.00 9487.00 539.31 12561.07 00:25:16.350 [2024-11-20T13:45:23.410Z] =================================================================================================================== 00:25:16.350 [2024-11-20T13:45:23.410Z] Total : 12572.83 49.11 891.19 0.00 9487.00 539.31 12561.07 00:25:16.350 Received shutdown signal, test time was about 15.000000 seconds 00:25:16.350 00:25:16.350 Latency(us) 00:25:16.350 [2024-11-20T13:45:23.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.350 [2024-11-20T13:45:23.410Z] =================================================================================================================== 00:25:16.350 [2024-11-20T13:45:23.410Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:16.350 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:16.350 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:16.350 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:16.350 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4025747 00:25:16.350 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4025747 /var/tmp/bdevperf.sock 00:25:16.350 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@835 -- # '[' -z 4025747 ']' 00:25:16.350 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:16.350 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.350 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:16.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:16.350 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.350 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:16.350 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:16.350 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.350 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:16.350 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:16.350 [2024-11-20 14:45:23.360350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:16.350 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:16.611 [2024-11-20 14:45:23.520731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:16.611 14:45:23 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:16.870 NVMe0n1 00:25:16.870 14:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:17.440 00:25:17.440 14:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:17.700 00:25:17.700 14:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:17.700 14:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:17.700 14:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:17.961 14:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:21.253 14:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:21.253 14:45:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:21.253 14:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4026791 00:25:21.253 14:45:28 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 4026791 00:25:21.253 14:45:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:22.272 { 00:25:22.272 "results": [ 00:25:22.272 { 00:25:22.272 "job": "NVMe0n1", 00:25:22.272 "core_mask": "0x1", 00:25:22.272 "workload": "verify", 00:25:22.272 "status": "finished", 00:25:22.272 "verify_range": { 00:25:22.272 "start": 0, 00:25:22.272 "length": 16384 00:25:22.272 }, 00:25:22.272 "queue_depth": 128, 00:25:22.272 "io_size": 4096, 00:25:22.272 "runtime": 1.005155, 00:25:22.272 "iops": 12991.031233988788, 00:25:22.272 "mibps": 50.7462157577687, 00:25:22.272 "io_failed": 0, 00:25:22.272 "io_timeout": 0, 00:25:22.272 "avg_latency_us": 9817.883747383468, 00:25:22.272 "min_latency_us": 1870.5066666666667, 00:25:22.272 "max_latency_us": 9611.946666666667 00:25:22.272 } 00:25:22.272 ], 00:25:22.272 "core_count": 1 00:25:22.272 } 00:25:22.272 14:45:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:22.272 [2024-11-20 14:45:23.054372] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:25:22.272 [2024-11-20 14:45:23.054431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4025747 ] 00:25:22.272 [2024-11-20 14:45:23.120453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.272 [2024-11-20 14:45:23.150305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.272 [2024-11-20 14:45:24.820368] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:22.272 [2024-11-20 14:45:24.820407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.272 [2024-11-20 14:45:24.820416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.272 [2024-11-20 14:45:24.820423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.272 [2024-11-20 14:45:24.820429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.272 [2024-11-20 14:45:24.820435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.272 [2024-11-20 14:45:24.820440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.272 [2024-11-20 14:45:24.820446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.272 [2024-11-20 14:45:24.820451] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.272 [2024-11-20 14:45:24.820456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:22.272 [2024-11-20 14:45:24.820477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:22.272 [2024-11-20 14:45:24.820488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7ed80 (9): Bad file descriptor 00:25:22.272 [2024-11-20 14:45:24.954336] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:22.272 Running I/O for 1 seconds... 00:25:22.272 12930.00 IOPS, 50.51 MiB/s 00:25:22.272 Latency(us) 00:25:22.272 [2024-11-20T13:45:29.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.272 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:22.272 Verification LBA range: start 0x0 length 0x4000 00:25:22.272 NVMe0n1 : 1.01 12991.03 50.75 0.00 0.00 9817.88 1870.51 9611.95 00:25:22.272 [2024-11-20T13:45:29.332Z] =================================================================================================================== 00:25:22.272 [2024-11-20T13:45:29.332Z] Total : 12991.03 50.75 0.00 0.00 9817.88 1870.51 9611.95 00:25:22.272 14:45:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:22.272 14:45:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:22.272 14:45:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.530 14:45:29 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:22.530 14:45:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:22.530 14:45:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.789 14:45:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:26.085 14:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:26.085 14:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:26.085 14:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 4025747 00:25:26.085 14:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4025747 ']' 00:25:26.085 14:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4025747 00:25:26.085 14:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:26.085 14:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:26.085 14:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4025747 00:25:26.085 14:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:26.085 14:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:26.085 14:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4025747' 00:25:26.085 killing 
process with pid 4025747 00:25:26.085 14:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4025747 00:25:26.085 14:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4025747 00:25:26.085 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:26.085 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:26.346 rmmod nvme_tcp 00:25:26.346 rmmod nvme_fabrics 00:25:26.346 rmmod nvme_keyring 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 4021305 ']' 00:25:26.346 14:45:33 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 4021305 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4021305 ']' 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4021305 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4021305 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4021305' 00:25:26.346 killing process with pid 4021305 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4021305 00:25:26.346 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4021305 00:25:26.605 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:26.605 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:26.605 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:26.605 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:26.605 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:26.605 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:26.605 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:26.605 14:45:33 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:26.605 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:26.605 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.605 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.605 14:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.513 14:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:28.513 00:25:28.513 real 0m35.996s 00:25:28.513 user 1m55.403s 00:25:28.513 sys 0m6.636s 00:25:28.513 14:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:28.513 14:45:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:28.513 ************************************ 00:25:28.513 END TEST nvmf_failover 00:25:28.513 ************************************ 00:25:28.513 14:45:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:28.513 14:45:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:28.513 14:45:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:28.513 14:45:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.513 ************************************ 00:25:28.513 START TEST nvmf_host_discovery 00:25:28.513 ************************************ 00:25:28.513 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:28.513 * Looking for test storage... 
00:25:28.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:28.513 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:28.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.773 --rc genhtml_branch_coverage=1 00:25:28.773 --rc genhtml_function_coverage=1 00:25:28.773 --rc 
genhtml_legend=1 00:25:28.773 --rc geninfo_all_blocks=1 00:25:28.773 --rc geninfo_unexecuted_blocks=1 00:25:28.773 00:25:28.773 ' 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:28.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.773 --rc genhtml_branch_coverage=1 00:25:28.773 --rc genhtml_function_coverage=1 00:25:28.773 --rc genhtml_legend=1 00:25:28.773 --rc geninfo_all_blocks=1 00:25:28.773 --rc geninfo_unexecuted_blocks=1 00:25:28.773 00:25:28.773 ' 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:28.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.773 --rc genhtml_branch_coverage=1 00:25:28.773 --rc genhtml_function_coverage=1 00:25:28.773 --rc genhtml_legend=1 00:25:28.773 --rc geninfo_all_blocks=1 00:25:28.773 --rc geninfo_unexecuted_blocks=1 00:25:28.773 00:25:28.773 ' 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:28.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.773 --rc genhtml_branch_coverage=1 00:25:28.773 --rc genhtml_function_coverage=1 00:25:28.773 --rc genhtml_legend=1 00:25:28.773 --rc geninfo_all_blocks=1 00:25:28.773 --rc geninfo_unexecuted_blocks=1 00:25:28.773 00:25:28.773 ' 00:25:28.773 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.774 14:45:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.774 14:45:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.774 14:45:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:28.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:28.774 14:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:34.056 
14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.056 14:45:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:34.056 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:34.056 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:34.056 Found net devices under 0000:31:00.0: cvl_0_0 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:34.056 Found net devices under 0000:31:00.1: cvl_0_1 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:34.056 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:34.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:25:34.057 00:25:34.057 --- 10.0.0.2 ping statistics --- 00:25:34.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.057 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:34.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:25:34.057 00:25:34.057 --- 10.0.0.1 ping statistics --- 00:25:34.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.057 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.057 
14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=4032144 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 4032144 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 4032144 ']' 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.057 14:45:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:34.057 [2024-11-20 14:45:40.981924] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:25:34.057 [2024-11-20 14:45:40.981974] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.057 [2024-11-20 14:45:41.054696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.057 [2024-11-20 14:45:41.083308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.057 [2024-11-20 14:45:41.083338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.057 [2024-11-20 14:45:41.083344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.057 [2024-11-20 14:45:41.083349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.057 [2024-11-20 14:45:41.083353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:34.057 [2024-11-20 14:45:41.083827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.317 [2024-11-20 14:45:41.186944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.317 [2024-11-20 14:45:41.195132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:34.317 14:45:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.317 null0 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.317 null1 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4032326 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 4032326 /tmp/host.sock 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 4032326 ']' 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:34.317 
14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:34.317 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.317 14:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:34.317 [2024-11-20 14:45:41.258798] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:25:34.317 [2024-11-20 14:45:41.258846] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4032326 ] 00:25:34.317 [2024-11-20 14:45:41.336083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.317 [2024-11-20 14:45:41.372382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:35.256 14:45:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.256 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:35.256 14:45:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:35.257 14:45:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:35.257 14:45:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.257 [2024-11-20 14:45:42.265787] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 
00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.257 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:35.517 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.518 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.518 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.518 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:35.518 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:35.518 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:35.518 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:35.518 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:35.518 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:35.518 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:25:35.518 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.518 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:35.518 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.518 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:35.518 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:35.518 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.518 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:35.518 14:45:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:36.087 [2024-11-20 14:45:43.065760] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:36.087 [2024-11-20 14:45:43.065781] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:36.087 [2024-11-20 14:45:43.065794] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:36.348 [2024-11-20 14:45:43.153070] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:36.348 [2024-11-20 14:45:43.337351] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:36.348 [2024-11-20 14:45:43.338325] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2286670:1 started. 
00:25:36.348 [2024-11-20 14:45:43.339918] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:36.348 [2024-11-20 14:45:43.339935] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:36.348 [2024-11-20 14:45:43.385007] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2286670 was disconnected and freed. delete nvme_qpair. 00:25:36.608 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:36.608 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:36.608 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:36.608 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:36.608 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:36.608 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.608 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.608 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:36.608 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:36.608 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.608 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.608 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:36.608 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:36.608 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:36.608 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:36.608 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" 
== "$NVMF_PORT" ]]' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:36.609 14:45:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.609 [2024-11-20 14:45:43.541226] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2255140:1 started. 
00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:36.609 [2024-11-20 14:45:43.544676] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2255140 was disconnected and freed. delete nvme_qpair. 
00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.609 [2024-11-20 14:45:43.609175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:36.609 [2024-11-20 14:45:43.609264] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:36.609 [2024-11-20 14:45:43.609284] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 
00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.609 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:36.610 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:36.610 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:36.610 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:36.610 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:36.610 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:36.610 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:36.610 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:36.610 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.610 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.610 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:36.610 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:36.610 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:36.869 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.869 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:36.869 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:36.869 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:36.870 14:45:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:36.870 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:36.870 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:36.870 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:36.870 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:36.870 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:36.870 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.870 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.870 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:36.870 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:36.870 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:36.870 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.870 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:36.870 14:45:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:36.870 [2024-11-20 14:45:43.737740] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:36.870 [2024-11-20 14:45:43.839629] bdev_nvme.c:5635:nvme_ctrlr_create_done: 
*INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:36.870 [2024-11-20 14:45:43.839664] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:36.870 [2024-11-20 14:45:43.839672] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:36.870 [2024-11-20 14:45:43.839676] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:37.807 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:37.807 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:37.807 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:37.807 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:37.807 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:37.807 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:37.807 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:37.807 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.807 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.807 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.807 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:37.807 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@922 -- # return 0 00:25:37.807 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:37.807 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:37.807 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.808 [2024-11-20 14:45:44.780861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.808 [2024-11-20 14:45:44.780879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.808 [2024-11-20 14:45:44.780886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.808 [2024-11-20 14:45:44.780892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.808 [2024-11-20 14:45:44.780897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.808 [2024-11-20 14:45:44.780902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.808 [2024-11-20 14:45:44.780908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.808 [2024-11-20 14:45:44.780914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.808 [2024-11-20 14:45:44.780919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2256d90 is same with the state(6) to be set 00:25:37.808 [2024-11-20 14:45:44.780963] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:37.808 [2024-11-20 14:45:44.780973] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:37.808 14:45:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:37.808 [2024-11-20 14:45:44.790874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2256d90 (9): Bad file descriptor 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.808 [2024-11-20 14:45:44.800906] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:37.808 [2024-11-20 14:45:44.800915] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:37.808 [2024-11-20 14:45:44.800919] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:37.808 [2024-11-20 14:45:44.800925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:37.808 [2024-11-20 14:45:44.800938] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:37.808 [2024-11-20 14:45:44.801117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.808 [2024-11-20 14:45:44.801128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2256d90 with addr=10.0.0.2, port=4420 00:25:37.808 [2024-11-20 14:45:44.801134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2256d90 is same with the state(6) to be set 00:25:37.808 [2024-11-20 14:45:44.801143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2256d90 (9): Bad file descriptor 00:25:37.808 [2024-11-20 14:45:44.801156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:37.808 [2024-11-20 14:45:44.801163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:37.808 [2024-11-20 14:45:44.801168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:37.808 [2024-11-20 14:45:44.801174] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:37.808 [2024-11-20 14:45:44.801178] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:37.808 [2024-11-20 14:45:44.801181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:37.808 [2024-11-20 14:45:44.810966] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:37.808 [2024-11-20 14:45:44.810974] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:37.808 [2024-11-20 14:45:44.810977] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:37.808 [2024-11-20 14:45:44.810981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:37.808 [2024-11-20 14:45:44.810991] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:37.808 [2024-11-20 14:45:44.811445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.808 [2024-11-20 14:45:44.811476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2256d90 with addr=10.0.0.2, port=4420 00:25:37.808 [2024-11-20 14:45:44.811485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2256d90 is same with the state(6) to be set 00:25:37.808 [2024-11-20 14:45:44.811499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2256d90 (9): Bad file descriptor 00:25:37.808 [2024-11-20 14:45:44.811519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:37.808 [2024-11-20 14:45:44.811525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:37.808 [2024-11-20 14:45:44.811531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:37.808 [2024-11-20 14:45:44.811536] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:37.808 [2024-11-20 14:45:44.811540] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:37.808 [2024-11-20 14:45:44.811543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:37.808 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:37.808 [2024-11-20 14:45:44.821022] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:37.808 [2024-11-20 14:45:44.821033] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:37.808 [2024-11-20 14:45:44.821036] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:37.808 [2024-11-20 14:45:44.821039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:37.808 [2024-11-20 14:45:44.821052] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:37.808 [2024-11-20 14:45:44.821243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.808 [2024-11-20 14:45:44.821261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2256d90 with addr=10.0.0.2, port=4420 00:25:37.808 [2024-11-20 14:45:44.821268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2256d90 is same with the state(6) to be set 00:25:37.808 [2024-11-20 14:45:44.821281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2256d90 (9): Bad file descriptor 00:25:37.808 [2024-11-20 14:45:44.821289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:37.809 [2024-11-20 14:45:44.821294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:37.809 [2024-11-20 14:45:44.821299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:37.809 [2024-11-20 14:45:44.821304] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:37.809 [2024-11-20 14:45:44.821307] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:37.809 [2024-11-20 14:45:44.821310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.809 [2024-11-20 14:45:44.831082] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:37.809 [2024-11-20 14:45:44.831095] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:37.809 [2024-11-20 14:45:44.831098] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:37.809 [2024-11-20 14:45:44.831102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:37.809 [2024-11-20 14:45:44.831114] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:37.809 [2024-11-20 14:45:44.831560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.809 [2024-11-20 14:45:44.831594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2256d90 with addr=10.0.0.2, port=4420 00:25:37.809 [2024-11-20 14:45:44.831603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2256d90 is same with the state(6) to be set 00:25:37.809 [2024-11-20 14:45:44.831617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2256d90 (9): Bad file descriptor 00:25:37.809 [2024-11-20 14:45:44.831637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:37.809 [2024-11-20 14:45:44.831643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:37.809 [2024-11-20 14:45:44.831649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:37.809 [2024-11-20 14:45:44.831654] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:37.809 [2024-11-20 14:45:44.831658] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:37.809 [2024-11-20 14:45:44.831661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:37.809 [2024-11-20 14:45:44.841144] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:37.809 [2024-11-20 14:45:44.841154] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:37.809 [2024-11-20 14:45:44.841157] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:37.809 [2024-11-20 14:45:44.841161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:37.809 [2024-11-20 14:45:44.841172] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:37.809 [2024-11-20 14:45:44.841598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.809 [2024-11-20 14:45:44.841629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2256d90 with addr=10.0.0.2, port=4420 00:25:37.809 [2024-11-20 14:45:44.841638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2256d90 is same with the state(6) to be set 00:25:37.809 [2024-11-20 14:45:44.841652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2256d90 (9): Bad file descriptor 00:25:37.809 [2024-11-20 14:45:44.841671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:37.809 [2024-11-20 14:45:44.841677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:37.809 [2024-11-20 14:45:44.841683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:37.809 [2024-11-20 14:45:44.841688] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:37.809 [2024-11-20 14:45:44.841692] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:37.809 [2024-11-20 14:45:44.841695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:37.809 [2024-11-20 14:45:44.851204] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:37.809 [2024-11-20 14:45:44.851214] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:37.809 [2024-11-20 14:45:44.851217] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:37.809 [2024-11-20 14:45:44.851221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:37.809 [2024-11-20 14:45:44.851233] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:37.809 [2024-11-20 14:45:44.851540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.809 [2024-11-20 14:45:44.851551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2256d90 with addr=10.0.0.2, port=4420 00:25:37.809 [2024-11-20 14:45:44.851556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2256d90 is same with the state(6) to be set 00:25:37.809 [2024-11-20 14:45:44.851564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2256d90 (9): Bad file descriptor 00:25:37.809 [2024-11-20 14:45:44.851571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:37.809 [2024-11-20 14:45:44.851576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:37.809 [2024-11-20 14:45:44.851581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:37.809 [2024-11-20 14:45:44.851586] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:37.809 [2024-11-20 14:45:44.851589] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:37.809 [2024-11-20 14:45:44.851592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.809 [2024-11-20 14:45:44.861261] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:37.809 [2024-11-20 14:45:44.861271] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:37.809 [2024-11-20 14:45:44.861274] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:37.809 [2024-11-20 14:45:44.861278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:37.809 [2024-11-20 14:45:44.861289] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:37.809 [2024-11-20 14:45:44.861620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.809 [2024-11-20 14:45:44.861629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2256d90 with addr=10.0.0.2, port=4420 00:25:37.809 [2024-11-20 14:45:44.861638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2256d90 is same with the state(6) to be set 00:25:37.809 [2024-11-20 14:45:44.861646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2256d90 (9): Bad file descriptor 00:25:37.809 [2024-11-20 14:45:44.861654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:37.809 [2024-11-20 14:45:44.861658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:37.809 [2024-11-20 14:45:44.861663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:37.809 [2024-11-20 14:45:44.861667] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:37.809 [2024-11-20 14:45:44.861671] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:37.809 [2024-11-20 14:45:44.861674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:37.809 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.069 [2024-11-20 14:45:44.868743] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:38.069 [2024-11-20 14:45:44.868757] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:38.069 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:25:38.069 14:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:39.007 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:39.007 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:39.007 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:39.007 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:39.007 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:39.007 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.007 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:39.007 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.007 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:39.007 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.007 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.008 14:45:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.008 14:45:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.008 14:45:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.008 14:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.433 [2024-11-20 14:45:47.117153] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:40.433 [2024-11-20 14:45:47.117167] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:40.433 [2024-11-20 14:45:47.117176] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:40.433 [2024-11-20 14:45:47.206432] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:40.433 [2024-11-20 14:45:47.267115] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 
00:25:40.433 [2024-11-20 14:45:47.267744] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x22544d0:1 started. 00:25:40.433 [2024-11-20 14:45:47.269077] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:40.433 [2024-11-20 14:45:47.269098] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:40.433 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.433 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.433 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:40.433 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.433 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:40.433 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.433 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:40.433 [2024-11-20 14:45:47.274010] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x22544d0 was disconnected and freed. delete nvme_qpair. 
00:25:40.433 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.434 request: 00:25:40.434 { 00:25:40.434 "name": "nvme", 00:25:40.434 "trtype": "tcp", 00:25:40.434 "traddr": "10.0.0.2", 00:25:40.434 "adrfam": "ipv4", 00:25:40.434 "trsvcid": "8009", 00:25:40.434 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:40.434 "wait_for_attach": true, 00:25:40.434 "method": "bdev_nvme_start_discovery", 00:25:40.434 "req_id": 1 00:25:40.434 } 00:25:40.434 Got JSON-RPC error response 00:25:40.434 response: 00:25:40.434 { 00:25:40.434 "code": -17, 00:25:40.434 "message": "File exists" 00:25:40.434 } 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_discovery_info 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 
-- # local es=0 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.434 request: 00:25:40.434 { 00:25:40.434 "name": "nvme_second", 00:25:40.434 "trtype": "tcp", 00:25:40.434 "traddr": "10.0.0.2", 00:25:40.434 "adrfam": "ipv4", 00:25:40.434 "trsvcid": "8009", 00:25:40.434 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:40.434 "wait_for_attach": true, 00:25:40.434 "method": "bdev_nvme_start_discovery", 00:25:40.434 "req_id": 1 00:25:40.434 } 00:25:40.434 Got JSON-RPC error response 00:25:40.434 response: 00:25:40.434 { 00:25:40.434 "code": -17, 00:25:40.434 "message": "File exists" 00:25:40.434 } 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:40.434 14:45:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.434 14:45:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.434 14:45:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.374 [2024-11-20 14:45:48.428265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.374 
[2024-11-20 14:45:48.428289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23beab0 with addr=10.0.0.2, port=8010 00:25:41.374 [2024-11-20 14:45:48.428299] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:41.374 [2024-11-20 14:45:48.428304] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:41.374 [2024-11-20 14:45:48.428309] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:42.754 [2024-11-20 14:45:49.430612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.754 [2024-11-20 14:45:49.430632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23beab0 with addr=10.0.0.2, port=8010 00:25:42.754 [2024-11-20 14:45:49.430641] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:42.754 [2024-11-20 14:45:49.430646] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:42.754 [2024-11-20 14:45:49.430651] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:43.410 [2024-11-20 14:45:50.432607] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:43.410 request: 00:25:43.410 { 00:25:43.410 "name": "nvme_second", 00:25:43.410 "trtype": "tcp", 00:25:43.410 "traddr": "10.0.0.2", 00:25:43.410 "adrfam": "ipv4", 00:25:43.410 "trsvcid": "8010", 00:25:43.410 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:43.410 "wait_for_attach": false, 00:25:43.410 "attach_timeout_ms": 3000, 00:25:43.410 "method": "bdev_nvme_start_discovery", 00:25:43.410 "req_id": 1 00:25:43.410 } 00:25:43.410 Got JSON-RPC error response 00:25:43.410 response: 00:25:43.410 { 00:25:43.410 "code": -110, 00:25:43.410 "message": "Connection timed out" 00:25:43.410 } 00:25:43.410 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:43.410 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:43.410 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:43.410 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:43.410 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:43.410 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:43.410 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:43.410 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.410 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.410 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:43.410 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:43.410 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:43.410 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4032326 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 
-- # sync 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:43.725 rmmod nvme_tcp 00:25:43.725 rmmod nvme_fabrics 00:25:43.725 rmmod nvme_keyring 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 4032144 ']' 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 4032144 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 4032144 ']' 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 4032144 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4032144 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 4032144' 00:25:43.725 killing process with pid 4032144 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 4032144 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 4032144 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.725 14:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:46.264 00:25:46.264 real 0m17.230s 00:25:46.264 user 0m21.312s 00:25:46.264 sys 0m5.126s 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:46.264 14:45:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.264 ************************************ 00:25:46.264 END TEST nvmf_host_discovery 00:25:46.264 ************************************ 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.264 ************************************ 00:25:46.264 START TEST nvmf_host_multipath_status 00:25:46.264 ************************************ 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:46.264 * Looking for test storage... 
00:25:46.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:46.264 14:45:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:46.264 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:46.265 14:45:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:46.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.265 --rc genhtml_branch_coverage=1 00:25:46.265 --rc genhtml_function_coverage=1 00:25:46.265 --rc genhtml_legend=1 00:25:46.265 --rc geninfo_all_blocks=1 00:25:46.265 --rc geninfo_unexecuted_blocks=1 00:25:46.265 00:25:46.265 ' 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:46.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.265 --rc genhtml_branch_coverage=1 00:25:46.265 --rc genhtml_function_coverage=1 00:25:46.265 --rc genhtml_legend=1 00:25:46.265 --rc geninfo_all_blocks=1 00:25:46.265 --rc geninfo_unexecuted_blocks=1 00:25:46.265 00:25:46.265 ' 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:46.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.265 --rc genhtml_branch_coverage=1 00:25:46.265 --rc genhtml_function_coverage=1 00:25:46.265 --rc genhtml_legend=1 00:25:46.265 --rc geninfo_all_blocks=1 00:25:46.265 --rc geninfo_unexecuted_blocks=1 00:25:46.265 00:25:46.265 ' 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:46.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.265 --rc genhtml_branch_coverage=1 00:25:46.265 --rc genhtml_function_coverage=1 00:25:46.265 --rc genhtml_legend=1 00:25:46.265 --rc geninfo_all_blocks=1 00:25:46.265 --rc geninfo_unexecuted_blocks=1 00:25:46.265 00:25:46.265 ' 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:46.265 
14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:46.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:46.265 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:46.266 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.266 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.266 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.266 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:46.266 14:45:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:46.266 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:46.266 14:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:51.545 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:51.545 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:51.545 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:51.546 Found net devices under 0000:31:00.0: cvl_0_0 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.546 14:45:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:51.546 Found net devices under 0000:31:00.1: cvl_0_1 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:51.546 14:45:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:51.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:51.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:25:51.546 00:25:51.546 --- 10.0.0.2 ping statistics --- 00:25:51.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.546 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:51.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:51.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:25:51.546 00:25:51.546 --- 10.0.0.1 ping statistics --- 00:25:51.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.546 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=4038986 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 4038986 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 4038986 ']' 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:51.546 14:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:51.546 [2024-11-20 14:45:58.388815] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:25:51.546 [2024-11-20 14:45:58.388864] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.546 [2024-11-20 14:45:58.474228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:51.546 [2024-11-20 14:45:58.510020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.546 [2024-11-20 14:45:58.510054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:51.546 [2024-11-20 14:45:58.510062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.546 [2024-11-20 14:45:58.510069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:51.546 [2024-11-20 14:45:58.510075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:51.546 [2024-11-20 14:45:58.511236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.546 [2024-11-20 14:45:58.511241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.115 14:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:52.115 14:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:52.115 14:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:52.115 14:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:52.115 14:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:52.373 14:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.373 14:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4038986 00:25:52.373 14:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:52.373 [2024-11-20 14:45:59.325884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.373 14:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:25:52.632 Malloc0 00:25:52.632 14:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:52.632 14:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:52.891 14:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:53.150 [2024-11-20 14:45:59.979852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.150 14:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:53.150 [2024-11-20 14:46:00.136310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:53.150 14:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4039371 00:25:53.150 14:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:53.150 14:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:53.150 14:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4039371 /var/tmp/bdevperf.sock 00:25:53.150 14:46:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 4039371 ']' 00:25:53.150 14:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:53.150 14:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.150 14:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:53.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:53.150 14:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.150 14:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:54.090 14:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.090 14:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:54.090 14:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:54.090 14:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:54.659 Nvme0n1 00:25:54.659 14:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:54.918 Nvme0n1 00:25:54.918 14:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:54.918 14:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:57.454 14:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:57.454 14:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:57.454 14:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:57.454 14:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:58.391 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:58.391 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:58.391 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.391 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:58.391 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.391 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:58.391 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.391 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:58.650 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:58.650 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:58.651 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.651 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:58.651 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.651 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:58.910 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.910 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:58.910 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
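Each `port_status` call in the trace runs `bdev_nvme_get_io_paths` over the bdevperf RPC socket and extracts one boolean per listener port with jq, e.g. `.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current`. The same lookup can be sketched in Python; the sample reply below is an assumption shaped after the jq filter, not the authoritative RPC schema:

```python
# Re-implementation of the jq lookup used by port_status in the trace.
def port_status(reply, trsvcid, field):
    """Return one field of the io_path whose listener matches trsvcid."""
    for group in reply["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]
    raise KeyError(f"no io_path for trsvcid {trsvcid}")

# Hypothetical bdev_nvme_get_io_paths reply for the optimized/optimized
# case, matching the first check_status in the trace
# (4420 current, both ports connected and accessible).
sample = {
    "poll_groups": [{
        "io_paths": [
            {"transport": {"trsvcid": "4420"},
             "current": True, "connected": True, "accessible": True},
            {"transport": {"trsvcid": "4421"},
             "current": False, "connected": True, "accessible": True},
        ],
    }],
}
```

With this shape, `port_status(sample, "4420", "current")` mirrors the `[[ true == \t\r\u\e ]]` comparisons the shell script performs for each of the six booleans passed to `check_status`.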
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.910 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:58.910 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:58.910 14:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.170 14:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.170 14:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:59.170 14:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.170 14:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:59.170 14:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.170 14:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:59.170 14:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:59.430 14:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:59.689 14:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:00.625 14:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:00.625 14:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:00.625 14:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.625 14:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:00.885 14:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:00.885 14:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:00.885 14:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.885 14:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:00.885 14:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.885 14:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:00.885 14:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.885 14:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:01.144 14:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.144 14:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:01.144 14:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:01.144 14:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.144 14:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.144 14:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:01.144 14:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.144 14:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:01.404 14:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.404 14:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:01.404 14:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.404 14:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:01.663 14:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.663 14:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:01.663 14:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:01.663 14:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:01.923 14:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:02.860 14:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:02.860 14:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:02.860 14:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.860 14:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:03.119 14:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.119 14:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:03.119 14:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.119 14:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:03.119 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.119 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:03.119 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.119 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:03.378 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.378 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:03.378 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.378 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:03.637 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.637 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:03.637 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:03.637 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.637 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.637 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:03.637 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.637 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:03.896 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.896 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:03.896 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:04.156 14:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:04.156 14:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:05.093 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:05.093 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:05.093 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.353 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.353 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.353 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:05.353 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.353 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.613 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.613 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.613 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.613 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.613 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.613 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:05.613 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:05.613 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.872 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.872 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:05.872 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.872 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:06.131 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.131 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:06.131 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:06.131 14:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.131 14:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.131 14:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:06.131 14:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:06.392 14:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:06.392 14:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:07.769 14:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:07.769 14:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:07.769 14:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.769 14:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:07.769 14:46:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.769 14:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:07.769 14:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.769 14:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.769 14:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.769 14:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.769 14:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.769 14:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:08.028 14:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.028 14:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:08.028 14:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:08.028 14:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.286 
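Each `set_ANA_state` call in the trace is followed by a `check_status` with six booleans: (4420 current, 4421 current, 4420 connected, 4421 connected, 4420 accessible, 4421 accessible). Collecting the combinations this log exercises gives the table below: "connected" stays true throughout, "accessible" is false exactly for listeners in the `inaccessible` state, and "current" follows the most preferred usable path (optimized before non_optimized, with 4420 winning ties). This is a reading of this particular trace, not a general specification:

```python
# Expected check_status booleans per ANA-state pair, as exercised in the
# trace: (cur_4420, cur_4421, conn_4420, conn_4421, acc_4420, acc_4421).
EXPECTED = {
    ("optimized", "optimized"):         (True,  False, True, True, True,  True),
    ("non_optimized", "optimized"):     (False, True,  True, True, True,  True),
    ("non_optimized", "non_optimized"): (True,  False, True, True, True,  True),
    ("non_optimized", "inaccessible"):  (True,  False, True, True, True,  False),
    ("inaccessible", "inaccessible"):   (False, False, True, True, False, False),
    ("inaccessible", "optimized"):      (False, True,  True, True, False, True),
}

def expected_status(ana_4420, ana_4421):
    """Look up the check_status vector the trace expects for an ANA pair."""
    return EXPECTED[(ana_4420, ana_4421)]
```

The inaccessible/inaccessible row is the interesting one: neither path is current or accessible, yet both remain connected, so I/O resumes without reconnecting once a listener becomes reachable again.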
14:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.286 14:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:08.286 14:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.286 14:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:08.286 14:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.286 14:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:08.286 14:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.286 14:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:08.545 14:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.545 14:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:08.545 14:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:08.545 14:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:08.804 14:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:09.739 14:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:09.739 14:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:09.739 14:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.739 14:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:09.998 14:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:09.998 14:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:09.998 14:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.998 14:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:09.998 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.998 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:09.998 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.998 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:10.257 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.257 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:10.257 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.257 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:10.515 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.515 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:10.515 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.515 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:10.515 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.515 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:10.515 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.515 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:10.774 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.774 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:11.033 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:11.033 14:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:11.033 14:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:11.291 14:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:12.226 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:12.226 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:12.226 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:12.226 
14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.485 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.485 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:12.485 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:12.485 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.485 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.485 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:12.485 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.485 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:12.744 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.744 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:12.744 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:12.744 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.003 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.003 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:13.003 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.003 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:13.003 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.003 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:13.003 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.003 14:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:13.261 14:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.261 14:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:13.261 14:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:13.261 14:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:13.520 14:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:14.456 14:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:14.456 14:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:14.456 14:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.456 14:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:14.715 14:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:14.715 14:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:14.715 14:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.715 14:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:14.974 14:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.974 14:46:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:14.974 14:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:14.974 14:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.974 14:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.974 14:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:14.974 14:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.974 14:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:15.233 14:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.233 14:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:15.233 14:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.233 14:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:15.233 14:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.233 
14:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:15.233 14:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.233 14:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:15.492 14:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.492 14:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:15.492 14:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:15.751 14:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:15.751 14:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:17.127 14:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:17.127 14:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:17.127 14:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.127 14:46:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:17.127 14:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.127 14:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:17.127 14:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:17.127 14:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.127 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.127 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:17.127 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.127 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:17.386 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.386 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:17.386 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.386 14:46:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:17.386 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.386 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:17.386 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:17.386 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.645 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.645 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:17.645 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.645 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:17.904 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.904 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:17.904 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:17.904 14:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:18.163 14:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:19.100 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:19.100 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:19.100 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.100 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:19.359 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.359 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:19.359 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.359 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:19.359 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:19.359 14:46:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:19.359 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:19.359 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.618 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.618 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:19.618 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.618 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:19.877 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.877 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:19.877 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:19.877 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.877 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.877 
14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:19.877 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.877 14:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:20.135 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:20.135 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4039371 00:26:20.135 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 4039371 ']' 00:26:20.135 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 4039371 00:26:20.135 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:20.135 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:20.135 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4039371 00:26:20.135 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:20.135 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:20.135 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4039371' 00:26:20.135 killing process with pid 4039371 00:26:20.135 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 4039371 00:26:20.135 
14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 4039371 00:26:20.135 { 00:26:20.135 "results": [ 00:26:20.135 { 00:26:20.136 "job": "Nvme0n1", 00:26:20.136 "core_mask": "0x4", 00:26:20.136 "workload": "verify", 00:26:20.136 "status": "terminated", 00:26:20.136 "verify_range": { 00:26:20.136 "start": 0, 00:26:20.136 "length": 16384 00:26:20.136 }, 00:26:20.136 "queue_depth": 128, 00:26:20.136 "io_size": 4096, 00:26:20.136 "runtime": 25.088635, 00:26:20.136 "iops": 11963.624166878748, 00:26:20.136 "mibps": 46.73290690187011, 00:26:20.136 "io_failed": 0, 00:26:20.136 "io_timeout": 0, 00:26:20.136 "avg_latency_us": 10680.878618028926, 00:26:20.136 "min_latency_us": 464.2133333333333, 00:26:20.136 "max_latency_us": 3019898.88 00:26:20.136 } 00:26:20.136 ], 00:26:20.136 "core_count": 1 00:26:20.136 } 00:26:20.398 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4039371 00:26:20.398 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:20.398 [2024-11-20 14:46:00.187376] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:26:20.398 [2024-11-20 14:46:00.187435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4039371 ] 00:26:20.398 [2024-11-20 14:46:00.265814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.398 [2024-11-20 14:46:00.300836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:20.398 Running I/O for 90 seconds... 
00:26:20.398 11158.00 IOPS, 43.59 MiB/s [2024-11-20T13:46:27.458Z] 12035.00 IOPS, 47.01 MiB/s [2024-11-20T13:46:27.458Z] 12327.67 IOPS, 48.15 MiB/s [2024-11-20T13:46:27.458Z] 12472.75 IOPS, 48.72 MiB/s [2024-11-20T13:46:27.458Z] 12543.80 IOPS, 49.00 MiB/s [2024-11-20T13:46:27.458Z] 12588.00 IOPS, 49.17 MiB/s [2024-11-20T13:46:27.458Z] 12617.00 IOPS, 49.29 MiB/s [2024-11-20T13:46:27.458Z] 12645.50 IOPS, 49.40 MiB/s [2024-11-20T13:46:27.458Z] 12667.11 IOPS, 49.48 MiB/s [2024-11-20T13:46:27.458Z] 12676.50 IOPS, 49.52 MiB/s [2024-11-20T13:46:27.458Z] 12690.00 IOPS, 49.57 MiB/s [2024-11-20T13:46:27.458Z] [2024-11-20 14:46:13.272766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.398 [2024-11-20 14:46:13.272801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.398 [2024-11-20 14:46:13.272831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.398 [2024-11-20 14:46:13.272838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:20.398 [2024-11-20 14:46:13.272849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.398 [2024-11-20 14:46:13.272855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:20.398 [2024-11-20 14:46:13.272865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.398 [2024-11-20 14:46:13.272871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:20.398 [2024-11-20 14:46:13.272881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.398 [2024-11-20 14:46:13.272886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.272896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.272901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.272911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.272917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.272927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.272933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.272943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.399 [2024-11-20 14:46:13.272949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.272959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.272969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.272980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.272986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.272996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.399 [2024-11-20 14:46:13.273880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:20.399 [2024-11-20 14:46:13.273892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.273897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.273909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.273915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.273927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.273932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.273944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.273950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.273963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.273968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.273980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.273985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.273997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.400 [2024-11-20 14:46:13.274522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.400 [2024-11-20 14:46:13.274541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.400 [2024-11-20 14:46:13.274560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.400 [2024-11-20 14:46:13.274578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:20.400 [2024-11-20 14:46:13.274590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.401 [2024-11-20 14:46:13.274597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.274608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.401 [2024-11-20 14:46:13.274615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.274629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.401 [2024-11-20 14:46:13.274635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.274728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.401 [2024-11-20 14:46:13.274735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.274751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.274756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.274771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.274777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.274792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.274798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.274812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.274818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.274832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.274838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.274853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.274859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.274873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.274879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.274893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.274899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.274913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.274919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.274933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.274939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.274955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.274962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.274977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.274983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.274998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.275004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.275019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.275026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.275041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.275047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.275062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.275068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.275082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.275088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.275103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.275108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.275123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.275129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.275144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.275149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.275164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.275170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.275184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.275190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.275205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.275212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.275226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.275233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.275252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.275259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.275274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.275279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.275295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.275301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.275316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.275321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.275336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.401 [2024-11-20 14:46:13.275342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:20.401 [2024-11-20 14:46:13.275357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.401 [2024-11-20 14:46:13.275363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:20.401 [2024-11-20 14:46:13.275378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.401 [2024-11-20 14:46:13.275384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:20.401 [2024-11-20 14:46:13.275450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.401 [2024-11-20 14:46:13.275457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:20.401 [2024-11-20 14:46:13.275474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.401 [2024-11-20 14:46:13.275479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:20.401 [2024-11-20 14:46:13.275495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.401 [2024-11-20 14:46:13.275501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:20.401 [2024-11-20 14:46:13.275517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.401 [2024-11-20 14:46:13.275523] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:20.401 [2024-11-20 14:46:13.275540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.401 [2024-11-20 14:46:13.275547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:13.275563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.402 [2024-11-20 14:46:13.275568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:13.275585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.402 [2024-11-20 14:46:13.275590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:13.275607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.402 [2024-11-20 14:46:13.275613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:13.275629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.402 [2024-11-20 14:46:13.275635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:13.275651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.402 [2024-11-20 14:46:13.275657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:13.275673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.402 [2024-11-20 14:46:13.275678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:20.402 11935.25 IOPS, 46.62 MiB/s [2024-11-20T13:46:27.462Z] 11017.15 IOPS, 43.04 MiB/s [2024-11-20T13:46:27.462Z] 10230.21 IOPS, 39.96 MiB/s [2024-11-20T13:46:27.462Z] 10162.87 IOPS, 39.70 MiB/s [2024-11-20T13:46:27.462Z] 10332.69 IOPS, 40.36 MiB/s [2024-11-20T13:46:27.462Z] 10716.29 IOPS, 41.86 MiB/s [2024-11-20T13:46:27.462Z] 11049.33 IOPS, 43.16 MiB/s [2024-11-20T13:46:27.462Z] 11224.63 IOPS, 43.85 MiB/s [2024-11-20T13:46:27.462Z] 11305.50 IOPS, 44.16 MiB/s [2024-11-20T13:46:27.462Z] 11417.76 IOPS, 44.60 MiB/s [2024-11-20T13:46:27.462Z] 11665.32 IOPS, 45.57 MiB/s [2024-11-20T13:46:27.462Z] 11884.91 IOPS, 46.43 MiB/s [2024-11-20T13:46:27.462Z] [2024-11-20 14:46:25.050566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.050602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.050631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.050638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.050648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.050654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.050665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.050671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.050687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.050692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.050703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.050708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.050718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.050723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.050734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93952 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.050739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.050750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.050755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.050766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.050771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.050781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.050793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.050804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.050809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.050820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.050825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 
sqhd:0008 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.050835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.050840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.050851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.050857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.050869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.050875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.050997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.051006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.051018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.051024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.051034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.051042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.051053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.051060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.051071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.051077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.051088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.051094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.051105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.051110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.051121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.051126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:26:20.402 [2024-11-20 14:46:25.051136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.051142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.051152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.051157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.051167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.051173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.051183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.051189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:20.402 [2024-11-20 14:46:25.052205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.402 [2024-11-20 14:46:25.052220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:20.402 11928.71 IOPS, 46.60 MiB/s [2024-11-20T13:46:27.462Z] 11964.32 IOPS, 46.74 MiB/s [2024-11-20T13:46:27.462Z] Received shutdown signal, test time was about 25.089280 seconds 
00:26:20.402
00:26:20.402 Latency(us)
00:26:20.402 [2024-11-20T13:46:27.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:20.403 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:20.403 Verification LBA range: start 0x0 length 0x4000
00:26:20.403 Nvme0n1 : 25.09 11963.62 46.73 0.00 0.00 10680.88 464.21 3019898.88
00:26:20.403 [2024-11-20T13:46:27.463Z] ===================================================================================================================
00:26:20.403 [2024-11-20T13:46:27.463Z] Total : 11963.62 46.73 0.00 0.00 10680.88 464.21 3019898.88
00:26:20.403 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod
nvme_fabrics 00:26:20.403 rmmod nvme_keyring 00:26:20.403 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:20.403 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:20.403 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:20.403 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 4038986 ']' 00:26:20.403 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 4038986 00:26:20.403 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 4038986 ']' 00:26:20.403 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 4038986 00:26:20.403 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:20.403 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:20.403 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4038986 00:26:20.662 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:20.662 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:20.662 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4038986' 00:26:20.662 killing process with pid 4038986 00:26:20.662 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 4038986 00:26:20.662 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 4038986 00:26:20.662 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:20.662 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:20.662 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:20.662 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:20.662 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:26:20.662 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:20.662 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:26:20.662 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:20.662 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:20.662 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.662 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.662 14:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.196 14:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:23.196 00:26:23.196 real 0m36.838s 00:26:23.196 user 1m37.695s 00:26:23.196 sys 0m9.119s 00:26:23.196 14:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:23.196 14:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:23.196 ************************************ 00:26:23.196 END TEST nvmf_host_multipath_status 00:26:23.196 ************************************ 00:26:23.196 14:46:29 nvmf_tcp.nvmf_host 
-- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:23.196 14:46:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:23.196 14:46:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:23.196 14:46:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.196 ************************************ 00:26:23.196 START TEST nvmf_discovery_remove_ifc 00:26:23.196 ************************************ 00:26:23.196 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:23.196 * Looking for test storage... 00:26:23.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:23.196 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:23.196 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:26:23.196 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:23.196 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 
00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:23.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.197 --rc genhtml_branch_coverage=1 00:26:23.197 --rc genhtml_function_coverage=1 00:26:23.197 --rc genhtml_legend=1 00:26:23.197 --rc geninfo_all_blocks=1 
00:26:23.197 --rc geninfo_unexecuted_blocks=1 00:26:23.197 00:26:23.197 ' 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:23.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.197 --rc genhtml_branch_coverage=1 00:26:23.197 --rc genhtml_function_coverage=1 00:26:23.197 --rc genhtml_legend=1 00:26:23.197 --rc geninfo_all_blocks=1 00:26:23.197 --rc geninfo_unexecuted_blocks=1 00:26:23.197 00:26:23.197 ' 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:23.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.197 --rc genhtml_branch_coverage=1 00:26:23.197 --rc genhtml_function_coverage=1 00:26:23.197 --rc genhtml_legend=1 00:26:23.197 --rc geninfo_all_blocks=1 00:26:23.197 --rc geninfo_unexecuted_blocks=1 00:26:23.197 00:26:23.197 ' 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:23.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.197 --rc genhtml_branch_coverage=1 00:26:23.197 --rc genhtml_function_coverage=1 00:26:23.197 --rc genhtml_legend=1 00:26:23.197 --rc geninfo_all_blocks=1 00:26:23.197 --rc geninfo_unexecuted_blocks=1 00:26:23.197 00:26:23.197 ' 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:23.197 
14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.197 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:23.198 
14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:23.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:23.198 14:46:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:23.198 14:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:28.470 14:46:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.470 14:46:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.470 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:28.470 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.471 14:46:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:28.471 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:28.471 Found net devices under 0000:31:00.0: cvl_0_0 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:28.471 Found net devices under 0000:31:00.1: cvl_0_1 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:28.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:26:28.471 00:26:28.471 --- 10.0.0.2 ping statistics --- 00:26:28.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.471 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:28.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:26:28.471 00:26:28.471 --- 10.0.0.1 ping statistics --- 00:26:28.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.471 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=4049884 00:26:28.471 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 4049884 00:26:28.472 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 4049884 ']' 00:26:28.472 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.472 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.472 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.472 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.472 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.472 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:28.731 [2024-11-20 14:46:35.559712] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:26:28.731 [2024-11-20 14:46:35.559763] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.731 [2024-11-20 14:46:35.630112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.731 [2024-11-20 14:46:35.658466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.731 [2024-11-20 14:46:35.658492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:28.731 [2024-11-20 14:46:35.658498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.731 [2024-11-20 14:46:35.658503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.731 [2024-11-20 14:46:35.658507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:28.731 [2024-11-20 14:46:35.658986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.731 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.731 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:28.731 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:28.731 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:28.731 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.732 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.732 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:28.732 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.732 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.732 [2024-11-20 14:46:35.769768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.732 [2024-11-20 14:46:35.777940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:28.732 null0 00:26:28.991 [2024-11-20 14:46:35.809940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:26:28.991 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.991 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4049910 00:26:28.991 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4049910 /tmp/host.sock 00:26:28.991 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 4049910 ']' 00:26:28.991 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:28.991 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.991 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:28.991 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:28.991 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.991 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.991 14:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:28.991 [2024-11-20 14:46:35.867777] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:26:28.991 [2024-11-20 14:46:35.867827] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4049910 ] 00:26:28.991 [2024-11-20 14:46:35.946383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.991 [2024-11-20 14:46:35.982580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.928 14:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.928 14:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:29.928 14:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:29.928 14:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:29.928 14:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.928 14:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.928 14:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.928 14:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:29.928 14:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.928 14:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.928 14:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.928 14:46:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:29.928 14:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.928 14:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.922 [2024-11-20 14:46:37.768168] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:30.922 [2024-11-20 14:46:37.768191] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:30.922 [2024-11-20 14:46:37.768205] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:30.922 [2024-11-20 14:46:37.896614] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:31.251 [2024-11-20 14:46:38.080750] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:31.251 [2024-11-20 14:46:38.081823] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2229690:1 started. 
00:26:31.251 [2024-11-20 14:46:38.083391] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:31.251 [2024-11-20 14:46:38.083437] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:31.251 [2024-11-20 14:46:38.083461] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:31.251 [2024-11-20 14:46:38.083475] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:31.251 [2024-11-20 14:46:38.083496] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:31.251 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.251 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:31.251 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:31.251 [2024-11-20 14:46:38.087513] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2229690 was disconnected and freed. delete nvme_qpair. 
00:26:31.251 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:31.251 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:31.251 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.251 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:31.251 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.251 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.251 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.251 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:31.251 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:31.252 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:31.252 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:31.252 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:31.252 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:31.252 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.252 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:31.252 14:46:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.252 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:31.252 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.252 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.252 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:31.252 14:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:32.648 14:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:32.648 14:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:32.648 14:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.648 14:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:32.648 14:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:32.648 14:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.648 14:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.648 14:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.648 14:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:32.648 14:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:33.586 14:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:26:33.586 14:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.586 14:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.586 14:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.586 14:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.586 14:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.587 14:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.587 14:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.587 14:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:33.587 14:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:34.523 14:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:34.523 14:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:34.523 14:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.523 14:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.523 14:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:34.523 14:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.523 14:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.523 14:46:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.523 14:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:34.523 14:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:35.459 14:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:35.459 14:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:35.459 14:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.459 14:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:35.459 14:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:35.459 14:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.459 14:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.459 14:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.459 14:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:35.459 14:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:36.394 14:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:36.394 14:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:36.394 14:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.394 14:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:36.394 14:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:36.394 14:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.394 14:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.394 14:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.394 14:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:36.394 14:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:36.653 [2024-11-20 14:46:43.523944] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:36.653 [2024-11-20 14:46:43.523979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.653 [2024-11-20 14:46:43.523988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.653 [2024-11-20 14:46:43.523995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.653 [2024-11-20 14:46:43.524000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.653 [2024-11-20 14:46:43.524006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.653 [2024-11-20 14:46:43.524011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:36.653 [2024-11-20 14:46:43.524017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.653 [2024-11-20 14:46:43.524022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.653 [2024-11-20 14:46:43.524028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.653 [2024-11-20 14:46:43.524033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.653 [2024-11-20 14:46:43.524039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2206050 is same with the state(6) to be set 00:26:36.653 [2024-11-20 14:46:43.533965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2206050 (9): Bad file descriptor 00:26:36.653 [2024-11-20 14:46:43.543998] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:36.653 [2024-11-20 14:46:43.544007] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:36.653 [2024-11-20 14:46:43.544011] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:36.653 [2024-11-20 14:46:43.544015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:36.653 [2024-11-20 14:46:43.544031] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:37.591 14:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.591 14:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.591 14:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.591 14:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.591 14:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.591 14:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.591 14:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.591 [2024-11-20 14:46:44.554307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:37.591 [2024-11-20 14:46:44.554352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2206050 with addr=10.0.0.2, port=4420 00:26:37.591 [2024-11-20 14:46:44.554365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2206050 is same with the state(6) to be set 00:26:37.591 [2024-11-20 14:46:44.554390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2206050 (9): Bad file descriptor 00:26:37.591 [2024-11-20 14:46:44.554834] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:26:37.591 [2024-11-20 14:46:44.554863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:37.591 [2024-11-20 14:46:44.554872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:37.591 [2024-11-20 14:46:44.554883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:37.591 [2024-11-20 14:46:44.554892] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:37.591 [2024-11-20 14:46:44.554898] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:37.591 [2024-11-20 14:46:44.554904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:37.591 [2024-11-20 14:46:44.554913] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:37.591 [2024-11-20 14:46:44.554919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:37.591 14:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.591 14:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:37.591 14:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:38.529 [2024-11-20 14:46:45.557292] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:38.529 [2024-11-20 14:46:45.557308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:38.529 [2024-11-20 14:46:45.557317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:38.529 [2024-11-20 14:46:45.557323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:38.529 [2024-11-20 14:46:45.557328] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:38.529 [2024-11-20 14:46:45.557333] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:38.529 [2024-11-20 14:46:45.557337] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:38.529 [2024-11-20 14:46:45.557340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:38.529 [2024-11-20 14:46:45.557356] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:38.529 [2024-11-20 14:46:45.557372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.529 [2024-11-20 14:46:45.557379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.529 [2024-11-20 14:46:45.557386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.529 [2024-11-20 14:46:45.557391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.529 [2024-11-20 14:46:45.557397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:38.529 [2024-11-20 14:46:45.557402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.529 [2024-11-20 14:46:45.557408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.529 [2024-11-20 14:46:45.557413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.529 [2024-11-20 14:46:45.557422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.529 [2024-11-20 14:46:45.557427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.529 [2024-11-20 14:46:45.557432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:38.529 [2024-11-20 14:46:45.557462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f5380 (9): Bad file descriptor 00:26:38.529 [2024-11-20 14:46:45.558463] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:38.529 [2024-11-20 14:46:45.558472] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:38.529 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.529 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.529 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.529 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.529 
14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.529 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.529 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.529 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.789 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:38.789 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:38.789 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:38.789 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:38.789 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.789 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.789 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.789 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.789 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.789 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.789 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.789 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:38.789 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:38.789 14:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:39.729 14:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:39.729 14:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.729 14:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:39.729 14:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:39.729 14:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:39.729 14:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.729 14:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.730 14:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.730 14:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:39.730 14:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.665 [2024-11-20 14:46:47.613429] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:40.665 [2024-11-20 14:46:47.613444] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:40.665 [2024-11-20 14:46:47.613454] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:40.665 [2024-11-20 14:46:47.701703] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:40.925 14:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.925 14:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.925 14:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.925 14:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.925 14:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.925 14:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.925 14:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.925 14:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.925 14:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:40.925 14:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.925 [2024-11-20 14:46:47.802488] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:40.925 [2024-11-20 14:46:47.803140] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x2210700:1 started. 
00:26:40.925 [2024-11-20 14:46:47.804046] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:40.925 [2024-11-20 14:46:47.804074] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:40.925 [2024-11-20 14:46:47.804089] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:40.925 [2024-11-20 14:46:47.804100] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:40.925 [2024-11-20 14:46:47.804106] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:40.925 [2024-11-20 14:46:47.851844] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x2210700 was disconnected and freed. delete nvme_qpair. 00:26:41.863 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.863 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.863 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.863 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.863 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.863 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.863 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.863 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.863 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:41.863 14:46:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:41.863 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 4049910 00:26:41.863 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 4049910 ']' 00:26:41.863 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 4049910 00:26:41.863 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:41.863 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:41.863 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4049910 00:26:41.864 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:41.864 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:41.864 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4049910' 00:26:41.864 killing process with pid 4049910 00:26:41.864 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 4049910 00:26:41.864 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 4049910 00:26:42.123 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:42.123 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:42.123 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:42.123 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:42.123 
14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:42.123 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:42.123 14:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:42.123 rmmod nvme_tcp 00:26:42.123 rmmod nvme_fabrics 00:26:42.123 rmmod nvme_keyring 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 4049884 ']' 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 4049884 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 4049884 ']' 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 4049884 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4049884 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4049884' 00:26:42.123 
killing process with pid 4049884 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 4049884 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 4049884 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.123 14:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.658 14:46:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:44.658 00:26:44.658 real 0m21.538s 00:26:44.658 user 0m27.423s 00:26:44.658 sys 0m5.362s 00:26:44.658 14:46:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:26:44.658 14:46:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.658 ************************************ 00:26:44.658 END TEST nvmf_discovery_remove_ifc 00:26:44.658 ************************************ 00:26:44.658 14:46:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:44.658 14:46:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:44.658 14:46:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:44.658 14:46:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.658 ************************************ 00:26:44.658 START TEST nvmf_identify_kernel_target 00:26:44.658 ************************************ 00:26:44.658 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:44.658 * Looking for test storage... 
00:26:44.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:44.658 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:44.658 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:44.659 14:46:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:44.659 14:46:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:44.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.659 --rc genhtml_branch_coverage=1 00:26:44.659 --rc genhtml_function_coverage=1 00:26:44.659 --rc genhtml_legend=1 00:26:44.659 --rc geninfo_all_blocks=1 00:26:44.659 --rc geninfo_unexecuted_blocks=1 00:26:44.659 00:26:44.659 ' 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:44.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.659 --rc genhtml_branch_coverage=1 00:26:44.659 --rc genhtml_function_coverage=1 00:26:44.659 --rc genhtml_legend=1 00:26:44.659 --rc geninfo_all_blocks=1 00:26:44.659 --rc geninfo_unexecuted_blocks=1 00:26:44.659 00:26:44.659 ' 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:44.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.659 --rc genhtml_branch_coverage=1 00:26:44.659 --rc genhtml_function_coverage=1 00:26:44.659 --rc genhtml_legend=1 00:26:44.659 --rc geninfo_all_blocks=1 00:26:44.659 --rc geninfo_unexecuted_blocks=1 00:26:44.659 00:26:44.659 ' 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:44.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.659 --rc genhtml_branch_coverage=1 00:26:44.659 --rc genhtml_function_coverage=1 00:26:44.659 --rc genhtml_legend=1 00:26:44.659 --rc geninfo_all_blocks=1 00:26:44.659 --rc geninfo_unexecuted_blocks=1 00:26:44.659 00:26:44.659 ' 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:44.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:44.660 14:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.940 14:46:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:49.940 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:49.940 14:46:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:49.940 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.940 14:46:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:49.940 Found net devices under 0000:31:00.0: cvl_0_0 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:49.940 Found net devices under 0000:31:00.1: cvl_0_1 
00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.940 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:49.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:49.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:26:49.941 00:26:49.941 --- 10.0.0.2 ping statistics --- 00:26:49.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.941 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:49.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:49.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:26:49.941 00:26:49.941 --- 10.0.0.1 ping statistics --- 00:26:49.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.941 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:49.941 
14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:49.941 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:52.482 Waiting for block devices as requested 00:26:52.482 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:52.482 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:52.482 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:52.482 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:52.482 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:52.482 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:52.482 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:52.741 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:52.741 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:52.741 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:53.001 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:53.001 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:53.001 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:53.001 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:53.261 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:26:53.261 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:53.261 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:53.261 No valid GPT data, bailing 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:26:53.261 00:26:53.261 Discovery Log Number of Records 2, Generation counter 2 00:26:53.261 =====Discovery Log Entry 0====== 00:26:53.261 trtype: tcp 00:26:53.261 adrfam: ipv4 00:26:53.261 subtype: current discovery subsystem 
00:26:53.261 treq: not specified, sq flow control disable supported 00:26:53.261 portid: 1 00:26:53.261 trsvcid: 4420 00:26:53.261 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:53.261 traddr: 10.0.0.1 00:26:53.261 eflags: none 00:26:53.261 sectype: none 00:26:53.261 =====Discovery Log Entry 1====== 00:26:53.261 trtype: tcp 00:26:53.261 adrfam: ipv4 00:26:53.261 subtype: nvme subsystem 00:26:53.261 treq: not specified, sq flow control disable supported 00:26:53.261 portid: 1 00:26:53.261 trsvcid: 4420 00:26:53.261 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:53.261 traddr: 10.0.0.1 00:26:53.261 eflags: none 00:26:53.261 sectype: none 00:26:53.261 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:53.261 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:53.521 ===================================================== 00:26:53.521 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:53.521 ===================================================== 00:26:53.521 Controller Capabilities/Features 00:26:53.521 ================================ 00:26:53.521 Vendor ID: 0000 00:26:53.521 Subsystem Vendor ID: 0000 00:26:53.521 Serial Number: 0073087391e1922de393 00:26:53.521 Model Number: Linux 00:26:53.521 Firmware Version: 6.8.9-20 00:26:53.521 Recommended Arb Burst: 0 00:26:53.521 IEEE OUI Identifier: 00 00 00 00:26:53.521 Multi-path I/O 00:26:53.521 May have multiple subsystem ports: No 00:26:53.521 May have multiple controllers: No 00:26:53.521 Associated with SR-IOV VF: No 00:26:53.521 Max Data Transfer Size: Unlimited 00:26:53.521 Max Number of Namespaces: 0 00:26:53.521 Max Number of I/O Queues: 1024 00:26:53.521 NVMe Specification Version (VS): 1.3 00:26:53.521 NVMe Specification Version (Identify): 1.3 00:26:53.521 Maximum Queue Entries: 1024 
00:26:53.521 Contiguous Queues Required: No 00:26:53.521 Arbitration Mechanisms Supported 00:26:53.521 Weighted Round Robin: Not Supported 00:26:53.521 Vendor Specific: Not Supported 00:26:53.521 Reset Timeout: 7500 ms 00:26:53.521 Doorbell Stride: 4 bytes 00:26:53.521 NVM Subsystem Reset: Not Supported 00:26:53.521 Command Sets Supported 00:26:53.521 NVM Command Set: Supported 00:26:53.521 Boot Partition: Not Supported 00:26:53.521 Memory Page Size Minimum: 4096 bytes 00:26:53.521 Memory Page Size Maximum: 4096 bytes 00:26:53.521 Persistent Memory Region: Not Supported 00:26:53.521 Optional Asynchronous Events Supported 00:26:53.521 Namespace Attribute Notices: Not Supported 00:26:53.521 Firmware Activation Notices: Not Supported 00:26:53.521 ANA Change Notices: Not Supported 00:26:53.521 PLE Aggregate Log Change Notices: Not Supported 00:26:53.521 LBA Status Info Alert Notices: Not Supported 00:26:53.521 EGE Aggregate Log Change Notices: Not Supported 00:26:53.521 Normal NVM Subsystem Shutdown event: Not Supported 00:26:53.521 Zone Descriptor Change Notices: Not Supported 00:26:53.521 Discovery Log Change Notices: Supported 00:26:53.521 Controller Attributes 00:26:53.521 128-bit Host Identifier: Not Supported 00:26:53.521 Non-Operational Permissive Mode: Not Supported 00:26:53.521 NVM Sets: Not Supported 00:26:53.521 Read Recovery Levels: Not Supported 00:26:53.521 Endurance Groups: Not Supported 00:26:53.521 Predictable Latency Mode: Not Supported 00:26:53.521 Traffic Based Keep ALive: Not Supported 00:26:53.521 Namespace Granularity: Not Supported 00:26:53.521 SQ Associations: Not Supported 00:26:53.521 UUID List: Not Supported 00:26:53.521 Multi-Domain Subsystem: Not Supported 00:26:53.521 Fixed Capacity Management: Not Supported 00:26:53.521 Variable Capacity Management: Not Supported 00:26:53.521 Delete Endurance Group: Not Supported 00:26:53.521 Delete NVM Set: Not Supported 00:26:53.521 Extended LBA Formats Supported: Not Supported 00:26:53.521 Flexible 
Data Placement Supported: Not Supported 00:26:53.521 00:26:53.521 Controller Memory Buffer Support 00:26:53.521 ================================ 00:26:53.521 Supported: No 00:26:53.521 00:26:53.521 Persistent Memory Region Support 00:26:53.521 ================================ 00:26:53.521 Supported: No 00:26:53.521 00:26:53.521 Admin Command Set Attributes 00:26:53.521 ============================ 00:26:53.521 Security Send/Receive: Not Supported 00:26:53.521 Format NVM: Not Supported 00:26:53.521 Firmware Activate/Download: Not Supported 00:26:53.521 Namespace Management: Not Supported 00:26:53.521 Device Self-Test: Not Supported 00:26:53.521 Directives: Not Supported 00:26:53.521 NVMe-MI: Not Supported 00:26:53.521 Virtualization Management: Not Supported 00:26:53.521 Doorbell Buffer Config: Not Supported 00:26:53.521 Get LBA Status Capability: Not Supported 00:26:53.522 Command & Feature Lockdown Capability: Not Supported 00:26:53.522 Abort Command Limit: 1 00:26:53.522 Async Event Request Limit: 1 00:26:53.522 Number of Firmware Slots: N/A 00:26:53.522 Firmware Slot 1 Read-Only: N/A 00:26:53.522 Firmware Activation Without Reset: N/A 00:26:53.522 Multiple Update Detection Support: N/A 00:26:53.522 Firmware Update Granularity: No Information Provided 00:26:53.522 Per-Namespace SMART Log: No 00:26:53.522 Asymmetric Namespace Access Log Page: Not Supported 00:26:53.522 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:53.522 Command Effects Log Page: Not Supported 00:26:53.522 Get Log Page Extended Data: Supported 00:26:53.522 Telemetry Log Pages: Not Supported 00:26:53.522 Persistent Event Log Pages: Not Supported 00:26:53.522 Supported Log Pages Log Page: May Support 00:26:53.522 Commands Supported & Effects Log Page: Not Supported 00:26:53.522 Feature Identifiers & Effects Log Page:May Support 00:26:53.522 NVMe-MI Commands & Effects Log Page: May Support 00:26:53.522 Data Area 4 for Telemetry Log: Not Supported 00:26:53.522 Error Log Page Entries 
Supported: 1 00:26:53.522 Keep Alive: Not Supported 00:26:53.522 00:26:53.522 NVM Command Set Attributes 00:26:53.522 ========================== 00:26:53.522 Submission Queue Entry Size 00:26:53.522 Max: 1 00:26:53.522 Min: 1 00:26:53.522 Completion Queue Entry Size 00:26:53.522 Max: 1 00:26:53.522 Min: 1 00:26:53.522 Number of Namespaces: 0 00:26:53.522 Compare Command: Not Supported 00:26:53.522 Write Uncorrectable Command: Not Supported 00:26:53.522 Dataset Management Command: Not Supported 00:26:53.522 Write Zeroes Command: Not Supported 00:26:53.522 Set Features Save Field: Not Supported 00:26:53.522 Reservations: Not Supported 00:26:53.522 Timestamp: Not Supported 00:26:53.522 Copy: Not Supported 00:26:53.522 Volatile Write Cache: Not Present 00:26:53.522 Atomic Write Unit (Normal): 1 00:26:53.522 Atomic Write Unit (PFail): 1 00:26:53.522 Atomic Compare & Write Unit: 1 00:26:53.522 Fused Compare & Write: Not Supported 00:26:53.522 Scatter-Gather List 00:26:53.522 SGL Command Set: Supported 00:26:53.522 SGL Keyed: Not Supported 00:26:53.522 SGL Bit Bucket Descriptor: Not Supported 00:26:53.522 SGL Metadata Pointer: Not Supported 00:26:53.522 Oversized SGL: Not Supported 00:26:53.522 SGL Metadata Address: Not Supported 00:26:53.522 SGL Offset: Supported 00:26:53.522 Transport SGL Data Block: Not Supported 00:26:53.522 Replay Protected Memory Block: Not Supported 00:26:53.522 00:26:53.522 Firmware Slot Information 00:26:53.522 ========================= 00:26:53.522 Active slot: 0 00:26:53.522 00:26:53.522 00:26:53.522 Error Log 00:26:53.522 ========= 00:26:53.522 00:26:53.522 Active Namespaces 00:26:53.522 ================= 00:26:53.522 Discovery Log Page 00:26:53.522 ================== 00:26:53.522 Generation Counter: 2 00:26:53.522 Number of Records: 2 00:26:53.522 Record Format: 0 00:26:53.522 00:26:53.522 Discovery Log Entry 0 00:26:53.522 ---------------------- 00:26:53.522 Transport Type: 3 (TCP) 00:26:53.522 Address Family: 1 (IPv4) 00:26:53.522 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:26:53.522 Entry Flags: 00:26:53.522 Duplicate Returned Information: 0 00:26:53.522 Explicit Persistent Connection Support for Discovery: 0 00:26:53.522 Transport Requirements: 00:26:53.522 Secure Channel: Not Specified 00:26:53.522 Port ID: 1 (0x0001) 00:26:53.522 Controller ID: 65535 (0xffff) 00:26:53.522 Admin Max SQ Size: 32 00:26:53.522 Transport Service Identifier: 4420 00:26:53.522 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:53.522 Transport Address: 10.0.0.1 00:26:53.522 Discovery Log Entry 1 00:26:53.522 ---------------------- 00:26:53.522 Transport Type: 3 (TCP) 00:26:53.522 Address Family: 1 (IPv4) 00:26:53.522 Subsystem Type: 2 (NVM Subsystem) 00:26:53.522 Entry Flags: 00:26:53.522 Duplicate Returned Information: 0 00:26:53.522 Explicit Persistent Connection Support for Discovery: 0 00:26:53.522 Transport Requirements: 00:26:53.522 Secure Channel: Not Specified 00:26:53.522 Port ID: 1 (0x0001) 00:26:53.522 Controller ID: 65535 (0xffff) 00:26:53.522 Admin Max SQ Size: 32 00:26:53.522 Transport Service Identifier: 4420 00:26:53.522 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:53.522 Transport Address: 10.0.0.1 00:26:53.522 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:53.522 get_feature(0x01) failed 00:26:53.522 get_feature(0x02) failed 00:26:53.522 get_feature(0x04) failed 00:26:53.522 ===================================================== 00:26:53.522 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:53.522 ===================================================== 00:26:53.522 Controller Capabilities/Features 00:26:53.522 ================================ 00:26:53.522 Vendor ID: 0000 00:26:53.522 Subsystem Vendor ID: 
0000 00:26:53.522 Serial Number: 372d40ca0e38dc003a73 00:26:53.522 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:53.522 Firmware Version: 6.8.9-20 00:26:53.522 Recommended Arb Burst: 6 00:26:53.522 IEEE OUI Identifier: 00 00 00 00:26:53.522 Multi-path I/O 00:26:53.522 May have multiple subsystem ports: Yes 00:26:53.522 May have multiple controllers: Yes 00:26:53.522 Associated with SR-IOV VF: No 00:26:53.522 Max Data Transfer Size: Unlimited 00:26:53.522 Max Number of Namespaces: 1024 00:26:53.522 Max Number of I/O Queues: 128 00:26:53.522 NVMe Specification Version (VS): 1.3 00:26:53.522 NVMe Specification Version (Identify): 1.3 00:26:53.522 Maximum Queue Entries: 1024 00:26:53.522 Contiguous Queues Required: No 00:26:53.522 Arbitration Mechanisms Supported 00:26:53.522 Weighted Round Robin: Not Supported 00:26:53.522 Vendor Specific: Not Supported 00:26:53.522 Reset Timeout: 7500 ms 00:26:53.522 Doorbell Stride: 4 bytes 00:26:53.522 NVM Subsystem Reset: Not Supported 00:26:53.522 Command Sets Supported 00:26:53.522 NVM Command Set: Supported 00:26:53.522 Boot Partition: Not Supported 00:26:53.522 Memory Page Size Minimum: 4096 bytes 00:26:53.522 Memory Page Size Maximum: 4096 bytes 00:26:53.522 Persistent Memory Region: Not Supported 00:26:53.522 Optional Asynchronous Events Supported 00:26:53.522 Namespace Attribute Notices: Supported 00:26:53.522 Firmware Activation Notices: Not Supported 00:26:53.522 ANA Change Notices: Supported 00:26:53.522 PLE Aggregate Log Change Notices: Not Supported 00:26:53.522 LBA Status Info Alert Notices: Not Supported 00:26:53.522 EGE Aggregate Log Change Notices: Not Supported 00:26:53.522 Normal NVM Subsystem Shutdown event: Not Supported 00:26:53.522 Zone Descriptor Change Notices: Not Supported 00:26:53.522 Discovery Log Change Notices: Not Supported 00:26:53.522 Controller Attributes 00:26:53.522 128-bit Host Identifier: Supported 00:26:53.522 Non-Operational Permissive Mode: Not Supported 00:26:53.522 NVM Sets: Not 
Supported 00:26:53.522 Read Recovery Levels: Not Supported 00:26:53.522 Endurance Groups: Not Supported 00:26:53.522 Predictable Latency Mode: Not Supported 00:26:53.522 Traffic Based Keep ALive: Supported 00:26:53.522 Namespace Granularity: Not Supported 00:26:53.522 SQ Associations: Not Supported 00:26:53.522 UUID List: Not Supported 00:26:53.522 Multi-Domain Subsystem: Not Supported 00:26:53.522 Fixed Capacity Management: Not Supported 00:26:53.522 Variable Capacity Management: Not Supported 00:26:53.522 Delete Endurance Group: Not Supported 00:26:53.522 Delete NVM Set: Not Supported 00:26:53.522 Extended LBA Formats Supported: Not Supported 00:26:53.522 Flexible Data Placement Supported: Not Supported 00:26:53.522 00:26:53.522 Controller Memory Buffer Support 00:26:53.522 ================================ 00:26:53.522 Supported: No 00:26:53.522 00:26:53.522 Persistent Memory Region Support 00:26:53.522 ================================ 00:26:53.522 Supported: No 00:26:53.522 00:26:53.522 Admin Command Set Attributes 00:26:53.522 ============================ 00:26:53.522 Security Send/Receive: Not Supported 00:26:53.522 Format NVM: Not Supported 00:26:53.522 Firmware Activate/Download: Not Supported 00:26:53.522 Namespace Management: Not Supported 00:26:53.522 Device Self-Test: Not Supported 00:26:53.522 Directives: Not Supported 00:26:53.522 NVMe-MI: Not Supported 00:26:53.522 Virtualization Management: Not Supported 00:26:53.522 Doorbell Buffer Config: Not Supported 00:26:53.522 Get LBA Status Capability: Not Supported 00:26:53.522 Command & Feature Lockdown Capability: Not Supported 00:26:53.522 Abort Command Limit: 4 00:26:53.522 Async Event Request Limit: 4 00:26:53.522 Number of Firmware Slots: N/A 00:26:53.523 Firmware Slot 1 Read-Only: N/A 00:26:53.523 Firmware Activation Without Reset: N/A 00:26:53.523 Multiple Update Detection Support: N/A 00:26:53.523 Firmware Update Granularity: No Information Provided 00:26:53.523 Per-Namespace SMART Log: Yes 
00:26:53.523 Asymmetric Namespace Access Log Page: Supported 00:26:53.523 ANA Transition Time : 10 sec 00:26:53.523 00:26:53.523 Asymmetric Namespace Access Capabilities 00:26:53.523 ANA Optimized State : Supported 00:26:53.523 ANA Non-Optimized State : Supported 00:26:53.523 ANA Inaccessible State : Supported 00:26:53.523 ANA Persistent Loss State : Supported 00:26:53.523 ANA Change State : Supported 00:26:53.523 ANAGRPID is not changed : No 00:26:53.523 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:53.523 00:26:53.523 ANA Group Identifier Maximum : 128 00:26:53.523 Number of ANA Group Identifiers : 128 00:26:53.523 Max Number of Allowed Namespaces : 1024 00:26:53.523 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:53.523 Command Effects Log Page: Supported 00:26:53.523 Get Log Page Extended Data: Supported 00:26:53.523 Telemetry Log Pages: Not Supported 00:26:53.523 Persistent Event Log Pages: Not Supported 00:26:53.523 Supported Log Pages Log Page: May Support 00:26:53.523 Commands Supported & Effects Log Page: Not Supported 00:26:53.523 Feature Identifiers & Effects Log Page:May Support 00:26:53.523 NVMe-MI Commands & Effects Log Page: May Support 00:26:53.523 Data Area 4 for Telemetry Log: Not Supported 00:26:53.523 Error Log Page Entries Supported: 128 00:26:53.523 Keep Alive: Supported 00:26:53.523 Keep Alive Granularity: 1000 ms 00:26:53.523 00:26:53.523 NVM Command Set Attributes 00:26:53.523 ========================== 00:26:53.523 Submission Queue Entry Size 00:26:53.523 Max: 64 00:26:53.523 Min: 64 00:26:53.523 Completion Queue Entry Size 00:26:53.523 Max: 16 00:26:53.523 Min: 16 00:26:53.523 Number of Namespaces: 1024 00:26:53.523 Compare Command: Not Supported 00:26:53.523 Write Uncorrectable Command: Not Supported 00:26:53.523 Dataset Management Command: Supported 00:26:53.523 Write Zeroes Command: Supported 00:26:53.523 Set Features Save Field: Not Supported 00:26:53.523 Reservations: Not Supported 00:26:53.523 Timestamp: Not Supported 
00:26:53.523 Copy: Not Supported 00:26:53.523 Volatile Write Cache: Present 00:26:53.523 Atomic Write Unit (Normal): 1 00:26:53.523 Atomic Write Unit (PFail): 1 00:26:53.523 Atomic Compare & Write Unit: 1 00:26:53.523 Fused Compare & Write: Not Supported 00:26:53.523 Scatter-Gather List 00:26:53.523 SGL Command Set: Supported 00:26:53.523 SGL Keyed: Not Supported 00:26:53.523 SGL Bit Bucket Descriptor: Not Supported 00:26:53.523 SGL Metadata Pointer: Not Supported 00:26:53.523 Oversized SGL: Not Supported 00:26:53.523 SGL Metadata Address: Not Supported 00:26:53.523 SGL Offset: Supported 00:26:53.523 Transport SGL Data Block: Not Supported 00:26:53.523 Replay Protected Memory Block: Not Supported 00:26:53.523 00:26:53.523 Firmware Slot Information 00:26:53.523 ========================= 00:26:53.523 Active slot: 0 00:26:53.523 00:26:53.523 Asymmetric Namespace Access 00:26:53.523 =========================== 00:26:53.523 Change Count : 0 00:26:53.523 Number of ANA Group Descriptors : 1 00:26:53.523 ANA Group Descriptor : 0 00:26:53.523 ANA Group ID : 1 00:26:53.523 Number of NSID Values : 1 00:26:53.523 Change Count : 0 00:26:53.523 ANA State : 1 00:26:53.523 Namespace Identifier : 1 00:26:53.523 00:26:53.523 Commands Supported and Effects 00:26:53.523 ============================== 00:26:53.523 Admin Commands 00:26:53.523 -------------- 00:26:53.523 Get Log Page (02h): Supported 00:26:53.523 Identify (06h): Supported 00:26:53.523 Abort (08h): Supported 00:26:53.523 Set Features (09h): Supported 00:26:53.523 Get Features (0Ah): Supported 00:26:53.523 Asynchronous Event Request (0Ch): Supported 00:26:53.523 Keep Alive (18h): Supported 00:26:53.523 I/O Commands 00:26:53.523 ------------ 00:26:53.523 Flush (00h): Supported 00:26:53.523 Write (01h): Supported LBA-Change 00:26:53.523 Read (02h): Supported 00:26:53.523 Write Zeroes (08h): Supported LBA-Change 00:26:53.523 Dataset Management (09h): Supported 00:26:53.523 00:26:53.523 Error Log 00:26:53.523 ========= 
00:26:53.523 Entry: 0 00:26:53.523 Error Count: 0x3 00:26:53.523 Submission Queue Id: 0x0 00:26:53.523 Command Id: 0x5 00:26:53.523 Phase Bit: 0 00:26:53.523 Status Code: 0x2 00:26:53.523 Status Code Type: 0x0 00:26:53.523 Do Not Retry: 1 00:26:53.523 Error Location: 0x28 00:26:53.523 LBA: 0x0 00:26:53.523 Namespace: 0x0 00:26:53.523 Vendor Log Page: 0x0 00:26:53.523 ----------- 00:26:53.523 Entry: 1 00:26:53.523 Error Count: 0x2 00:26:53.523 Submission Queue Id: 0x0 00:26:53.523 Command Id: 0x5 00:26:53.523 Phase Bit: 0 00:26:53.523 Status Code: 0x2 00:26:53.523 Status Code Type: 0x0 00:26:53.523 Do Not Retry: 1 00:26:53.523 Error Location: 0x28 00:26:53.523 LBA: 0x0 00:26:53.523 Namespace: 0x0 00:26:53.523 Vendor Log Page: 0x0 00:26:53.523 ----------- 00:26:53.523 Entry: 2 00:26:53.523 Error Count: 0x1 00:26:53.523 Submission Queue Id: 0x0 00:26:53.523 Command Id: 0x4 00:26:53.523 Phase Bit: 0 00:26:53.523 Status Code: 0x2 00:26:53.523 Status Code Type: 0x0 00:26:53.523 Do Not Retry: 1 00:26:53.523 Error Location: 0x28 00:26:53.523 LBA: 0x0 00:26:53.523 Namespace: 0x0 00:26:53.523 Vendor Log Page: 0x0 00:26:53.523 00:26:53.523 Number of Queues 00:26:53.523 ================ 00:26:53.523 Number of I/O Submission Queues: 128 00:26:53.523 Number of I/O Completion Queues: 128 00:26:53.523 00:26:53.523 ZNS Specific Controller Data 00:26:53.523 ============================ 00:26:53.523 Zone Append Size Limit: 0 00:26:53.523 00:26:53.523 00:26:53.523 Active Namespaces 00:26:53.523 ================= 00:26:53.523 get_feature(0x05) failed 00:26:53.523 Namespace ID:1 00:26:53.523 Command Set Identifier: NVM (00h) 00:26:53.523 Deallocate: Supported 00:26:53.523 Deallocated/Unwritten Error: Not Supported 00:26:53.523 Deallocated Read Value: Unknown 00:26:53.523 Deallocate in Write Zeroes: Not Supported 00:26:53.523 Deallocated Guard Field: 0xFFFF 00:26:53.523 Flush: Supported 00:26:53.523 Reservation: Not Supported 00:26:53.523 Namespace Sharing Capabilities: Multiple 
Controllers 00:26:53.523 Size (in LBAs): 3750748848 (1788GiB) 00:26:53.523 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:53.523 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:53.523 UUID: 9ff7a9cf-7c64-4d52-8142-d721a2d6bbba 00:26:53.523 Thin Provisioning: Not Supported 00:26:53.523 Per-NS Atomic Units: Yes 00:26:53.523 Atomic Write Unit (Normal): 8 00:26:53.523 Atomic Write Unit (PFail): 8 00:26:53.523 Preferred Write Granularity: 8 00:26:53.523 Atomic Compare & Write Unit: 8 00:26:53.523 Atomic Boundary Size (Normal): 0 00:26:53.523 Atomic Boundary Size (PFail): 0 00:26:53.523 Atomic Boundary Offset: 0 00:26:53.523 NGUID/EUI64 Never Reused: No 00:26:53.523 ANA group ID: 1 00:26:53.523 Namespace Write Protected: No 00:26:53.523 Number of LBA Formats: 1 00:26:53.523 Current LBA Format: LBA Format #00 00:26:53.523 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:53.523 00:26:53.523 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:53.523 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:53.523 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:53.523 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:53.523 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:53.523 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:53.523 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:53.523 rmmod nvme_tcp 00:26:53.523 rmmod nvme_fabrics 00:26:53.523 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:53.523 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:53.523 14:47:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:53.523 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:26:53.523 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:53.523 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:53.523 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:53.523 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:53.523 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:53.524 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:53.524 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:53.524 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:53.524 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:53.524 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.524 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.524 14:47:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.431 14:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:55.431 14:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:55.431 14:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:55.431 14:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:55.431 14:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:55.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:55.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:55.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:55.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:55.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:55.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:58.230 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:58.230 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:58.230 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:58.230 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:58.230 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:58.230 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:58.230 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:58.230 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:58.230 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:58.230 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:58.230 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:58.230 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:58.230 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:26:58.230 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:58.230 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:58.230 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:00.138 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:00.138 00:27:00.138 real 0m15.473s 00:27:00.138 user 0m3.384s 00:27:00.138 sys 0m7.458s 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:00.138 ************************************ 00:27:00.138 END TEST nvmf_identify_kernel_target 00:27:00.138 ************************************ 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.138 ************************************ 00:27:00.138 START TEST nvmf_auth_host 00:27:00.138 ************************************ 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:00.138 * Looking for test storage... 
00:27:00.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:00.138 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:00.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.138 --rc genhtml_branch_coverage=1 00:27:00.138 --rc genhtml_function_coverage=1 00:27:00.138 --rc genhtml_legend=1 00:27:00.138 --rc geninfo_all_blocks=1 00:27:00.138 --rc geninfo_unexecuted_blocks=1 00:27:00.138 00:27:00.138 ' 00:27:00.138 14:47:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:00.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.138 --rc genhtml_branch_coverage=1 00:27:00.138 --rc genhtml_function_coverage=1 00:27:00.138 --rc genhtml_legend=1 00:27:00.138 --rc geninfo_all_blocks=1 00:27:00.138 --rc geninfo_unexecuted_blocks=1 00:27:00.138 00:27:00.138 ' 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:00.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.139 --rc genhtml_branch_coverage=1 00:27:00.139 --rc genhtml_function_coverage=1 00:27:00.139 --rc genhtml_legend=1 00:27:00.139 --rc geninfo_all_blocks=1 00:27:00.139 --rc geninfo_unexecuted_blocks=1 00:27:00.139 00:27:00.139 ' 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:00.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.139 --rc genhtml_branch_coverage=1 00:27:00.139 --rc genhtml_function_coverage=1 00:27:00.139 --rc genhtml_legend=1 00:27:00.139 --rc geninfo_all_blocks=1 00:27:00.139 --rc geninfo_unexecuted_blocks=1 00:27:00.139 00:27:00.139 ' 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.139 14:47:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:00.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:00.139 14:47:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:00.139 14:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:05.419 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:05.419 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:05.419 Found net devices under 0000:31:00.0: cvl_0_0 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:05.419 Found net devices under 0000:31:00.1: cvl_0_1 00:27:05.419 14:47:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.419 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:05.420 14:47:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:05.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:27:05.420 00:27:05.420 --- 10.0.0.2 ping statistics --- 00:27:05.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.420 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:05.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:27:05.420 00:27:05.420 --- 10.0.0.1 ping statistics --- 00:27:05.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.420 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=4065014 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 4065014 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 4065014 ']' 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:05.420 14:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:06.357 14:47:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=01f8e7d23a4a1a2425930b783411166b 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.SdJ 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 01f8e7d23a4a1a2425930b783411166b 0 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 01f8e7d23a4a1a2425930b783411166b 0 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=01f8e7d23a4a1a2425930b783411166b 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:06.357 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.SdJ 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.SdJ 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.SdJ 
00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e7df7a2d1741674fca981794540cfbef5e6d477158b5dc9b2c296b422d02a32b 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.S7n 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e7df7a2d1741674fca981794540cfbef5e6d477158b5dc9b2c296b422d02a32b 3 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e7df7a2d1741674fca981794540cfbef5e6d477158b5dc9b2c296b422d02a32b 3 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e7df7a2d1741674fca981794540cfbef5e6d477158b5dc9b2c296b422d02a32b 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.S7n 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.S7n 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.S7n 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=71302abdcf0a565b0cca09bc02a5136c840985a005ab7309 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1SX 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 71302abdcf0a565b0cca09bc02a5136c840985a005ab7309 0 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 71302abdcf0a565b0cca09bc02a5136c840985a005ab7309 0 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=71302abdcf0a565b0cca09bc02a5136c840985a005ab7309 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1SX 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1SX 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.1SX 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=56bb53edee590ac3f1b981b48094295d3dd49c930ee62e9b 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.T9t 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 56bb53edee590ac3f1b981b48094295d3dd49c930ee62e9b 2 00:27:06.358 14:47:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 56bb53edee590ac3f1b981b48094295d3dd49c930ee62e9b 2 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=56bb53edee590ac3f1b981b48094295d3dd49c930ee62e9b 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:06.358 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.T9t 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.T9t 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.T9t 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1cbea7ba0d3e8104fe280a7e00eddca7 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.oAr 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1cbea7ba0d3e8104fe280a7e00eddca7 1 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1cbea7ba0d3e8104fe280a7e00eddca7 1 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1cbea7ba0d3e8104fe280a7e00eddca7 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.oAr 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.oAr 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.oAr 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b4a8fa97846bde2072ccba6b684526d3 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.icE 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b4a8fa97846bde2072ccba6b684526d3 1 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b4a8fa97846bde2072ccba6b684526d3 1 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b4a8fa97846bde2072ccba6b684526d3 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.icE 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.icE 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.icE 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.619 14:47:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=372902fe948b219441f1ccdd1f6be253ad6dc28666af4e61 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.AQM 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 372902fe948b219441f1ccdd1f6be253ad6dc28666af4e61 2 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 372902fe948b219441f1ccdd1f6be253ad6dc28666af4e61 2 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=372902fe948b219441f1ccdd1f6be253ad6dc28666af4e61 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.AQM 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.AQM 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.AQM 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=602ff5515fbadb0e71e592981298301b 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fTE 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 602ff5515fbadb0e71e592981298301b 0 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 602ff5515fbadb0e71e592981298301b 0 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=602ff5515fbadb0e71e592981298301b 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fTE 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fTE 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.fTE 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=65ca15c353886b6e37ac156479f312139aea24aafabd83b3493c182e83639fa5 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.pHh 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 65ca15c353886b6e37ac156479f312139aea24aafabd83b3493c182e83639fa5 3 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 65ca15c353886b6e37ac156479f312139aea24aafabd83b3493c182e83639fa5 3 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=65ca15c353886b6e37ac156479f312139aea24aafabd83b3493c182e83639fa5 00:27:06.619 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:06.619 14:47:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.pHh 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.pHh 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.pHh 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 4065014 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 4065014 ']' 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
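The `format_dhchap_key`/`format_key` steps above (the `python -` invocations whose body the trace hides) turn each raw hex string into the DHHC-1 representation used for NVMe DH-HMAC-CHAP secrets: `prefix:digest-as-two-hex-digits:base64(ASCII secret + CRC-32):`, with the CRC-32 of the ASCII secret appended in little-endian order before encoding. A sketch under that assumed layout; for the null-digest key `71302abd...` generated earlier it reproduces the `DHHC-1:00:NzEz...fr2wzg==:` string that appears later in this log:

```shell
#!/usr/bin/env bash
# Sketch of the DHHC-1 formatting step; assumes the layout
# base64(ASCII secret || little-endian CRC-32) with the digest id
# rendered as two hex digits. Not necessarily SPDK's exact code.
format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, sys, zlib

prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
# CRC-32 of the ASCII secret, appended little-endian before base64.
crc = zlib.crc32(key).to_bytes(4, "little")
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
PYEOF
}

# Key and digest id taken from the trace above.
format_key DHHC-1 71302abdcf0a565b0cca09bc02a5136c840985a005ab7309 0
```

Digest id 0 renders as `00` and the sha384 controller key's id 2 as `02`, matching the `DHHC-1:00:...` and `DHHC-1:02:...` strings passed to `nvmet_auth_set_key` later in the log.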
00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.SdJ 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.S7n ]] 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.S7n 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.1SX 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
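The `host/auth.sh@80`-`@82` entries traced here loop over every generated key, registering `keyN` and, when a matching controller key was generated, `ckeyN` via the `keyring_file_add_key` RPC. The loop structure can be sketched with `rpc_cmd` stubbed out to print what it would send to `/var/tmp/spdk.sock` (file names are the ones from this run):

```shell
#!/usr/bin/env bash
# Sketch of the registration loop; rpc_cmd is stubbed so the example is
# runnable without a live SPDK target listening on /var/tmp/spdk.sock.
rpc_cmd() { echo "rpc: $*"; }

keys=(/tmp/spdk.key-null.SdJ /tmp/spdk.key-null.1SX)      # keys[i]  from the trace
ckeys=(/tmp/spdk.key-sha512.S7n /tmp/spdk.key-sha384.T9t) # ckeys[i] from the trace

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    # Controller keys are optional; register ckeyN only when one exists
    # (the trace guards this with [[ -n ${ckeys[i]} ]]).
    [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
done
```

In the real run the last entry, `ckeys[4]`, is empty, so only `key4` is registered for it.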
00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.T9t ]] 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.T9t 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.oAr 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.icE ]] 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.icE 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.AQM 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.fTE ]] 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.fTE 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.pHh 00:27:06.880 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.881 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.141 14:47:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:07.141 14:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:09.678 Waiting for block devices as requested 00:27:09.678 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:09.678 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:09.678 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:09.678 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:09.678 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:09.678 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:09.678 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:09.678 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:09.678 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:09.937 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:09.937 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:09.937 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:10.196 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:10.196 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:10.196 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:10.196 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:10.196 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:10.767 No valid GPT data, bailing 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:27:10.767 00:27:10.767 Discovery Log Number of Records 2, Generation counter 2 00:27:10.767 =====Discovery Log Entry 0====== 00:27:10.767 trtype: tcp 00:27:10.767 adrfam: ipv4 00:27:10.767 subtype: current discovery subsystem 00:27:10.767 treq: not specified, sq flow control disable supported 00:27:10.767 portid: 1 00:27:10.767 trsvcid: 4420 00:27:10.767 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:10.767 traddr: 10.0.0.1 00:27:10.767 eflags: none 00:27:10.767 sectype: none 00:27:10.767 =====Discovery Log Entry 1====== 00:27:10.767 trtype: tcp 00:27:10.767 adrfam: ipv4 00:27:10.767 subtype: nvme subsystem 00:27:10.767 treq: not specified, sq flow control disable supported 00:27:10.767 portid: 1 00:27:10.767 trsvcid: 4420 00:27:10.767 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:10.767 traddr: 10.0.0.1 00:27:10.767 eflags: none 00:27:10.767 sectype: none 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]] 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.767 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.028 nvme0n1 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:11.028 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: ]] 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.029 14:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.029 nvme0n1 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.029 14:47:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]] 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.029 
14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.029 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.289 nvme0n1 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: ]] 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.289 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:27:11.550 nvme0n1 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: ]] 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.550 nvme0n1 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.550 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:11.551 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:11.551 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:11.551 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.551 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.551 14:47:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:11.551 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:11.551 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:11.551 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.551 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.551 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:11.551 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:11.551 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.551 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:11.551 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.551 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.812 nvme0n1 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.812 
14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: ]] 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:11.812 
14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.812 14:47:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.812 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.073 nvme0n1 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.073 14:47:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]] 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.073 14:47:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.073 14:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.073 nvme0n1 00:27:12.073 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.073 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.073 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.073 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.073 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.073 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.333 14:47:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: ]] 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.333 nvme0n1 00:27:12.333 14:47:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:12.333 14:47:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: ]] 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.333 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.593 nvme0n1 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.593 14:47:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.593 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.854 nvme0n1 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: ]] 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP
00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:12.854 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.114 nvme0n1
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==:
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==:
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==:
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]]
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==:
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.114 14:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.375 nvme0n1
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh:
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk:
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh:
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: ]]
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk:
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.375 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.635 nvme0n1
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==:
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9:
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==:
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: ]]
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9:
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:13.635 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.636 nvme0n1
00:27:13.636 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=:
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=:
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:13.896 nvme0n1
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.896 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK:
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=:
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK:
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: ]]
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=:
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.156 14:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.156 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.156 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:14.156 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:14.156 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:14.156 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:14.156 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:14.156 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:14.156 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:14.156 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:14.156 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:14.156 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:14.156 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:14.156 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:14.156 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.156 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.418 nvme0n1
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==:
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==:
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==:
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]]
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==:
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.418 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.678 nvme0n1
00:27:14.678 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.678 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:14.678 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:14.678 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.937 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.937 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.937 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:14.937 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh:
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk:
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh:
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: ]]
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk:
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.938 14:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.198 nvme0n1
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==:
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9:
00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:15.198 14:47:22
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: ]] 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.198 14:47:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.198 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.199 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.199 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.199 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.199 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.199 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:15.199 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.199 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.767 nvme0n1 00:27:15.767 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.767 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.767 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.767 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.767 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.768 14:47:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.768 14:47:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.768 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.028 nvme0n1 00:27:16.028 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.028 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.028 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.028 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.028 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.028 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.028 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.028 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.028 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.028 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.028 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: ]] 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.029 14:47:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.029 14:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.599 nvme0n1 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.599 14:47:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]] 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.599 14:47:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:16.599 14:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.599 14:47:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.168 nvme0n1 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: ]] 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.168 14:47:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.168 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.737 nvme0n1 00:27:17.737 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.737 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.737 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.737 14:47:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.737 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.737 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:17.997 14:47:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: ]] 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:17.997 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.998 14:47:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.998 14:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.568 nvme0n1 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.568 14:47:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.568 14:47:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.568 14:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.138 nvme0n1 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:19.138 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: ]] 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.139 14:47:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.139 nvme0n1 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.139 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.399 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.399 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.399 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.399 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.399 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.399 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.399 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:19.399 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.399 14:47:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.399 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.399 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.399 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]] 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.400 14:47:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.400 14:47:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.400 nvme0n1 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: ]] 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.400 14:47:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.400 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.661 nvme0n1 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.661 14:47:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.661 14:47:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: ]] 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.661 14:47:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.661 nvme0n1 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.661 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.926 14:47:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:27:19.926 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.927 14:47:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.927 nvme0n1 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: ]] 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # keyid=0 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.927 14:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.275 nvme0n1 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]] 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:20.275 14:47:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.275 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.276 nvme0n1 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: ]] 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.276 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.601 nvme0n1 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 
00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: ]] 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.601 nvme0n1 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.601 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.860 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.860 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.860 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.860 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.860 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.860 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.860 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:20.860 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.860 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.860 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.860 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:20.860 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:20.860 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:20.860 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.860 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.861 14:47:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.861 nvme0n1 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: ]] 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.861 14:47:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.861 14:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.120 nvme0n1 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=1 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]] 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.121 14:47:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.121 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.381 nvme0n1 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 
-- # echo 'hmac(sha384)' 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: ]] 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.381 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.382 14:47:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.382 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.641 nvme0n1 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.641 14:47:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: ]] 00:27:21.641 14:47:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.641 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.642 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.642 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:27:21.642 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.642 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.642 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.642 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.642 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.642 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.642 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.901 nvme0n1 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.901 14:47:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:21.901 14:47:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.901 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.902 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.902 
14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.902 14:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.162 nvme0n1 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.162 14:47:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: ]] 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.162 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.738 nvme0n1 
00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:22.738 14:47:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]] 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.738 
14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.738 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.998 nvme0n1 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.998 14:47:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.998 14:47:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: ]] 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:22.998 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.999 14:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.258 nvme0n1 00:27:23.258 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.258 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.258 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.258 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.258 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.258 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.258 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.258 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.258 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.258 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.517 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.517 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.517 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:23.517 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.517 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.517 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.517 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.517 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:23.517 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:23.517 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: ]] 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:23.518 14:47:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.518 14:47:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.518 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.777 nvme0n1 00:27:23.777 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.777 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.777 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.777 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.777 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.777 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.777 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.777 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.777 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.777 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.778 14:47:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:23.778 14:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.037 nvme0n1 00:27:24.037 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.037 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.037 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.037 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.037 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.037 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:24.297 14:47:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: ]] 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:24.297 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.298 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.868 nvme0n1 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]] 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.868 14:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.439 nvme0n1 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh:
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: ]]
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk:
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:25.439 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.009 nvme0n1
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==:
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9:
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==:
00:27:26.009 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: ]]
00:27:26.010 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9:
00:27:26.010 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:27:26.010 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:26.010 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:26.010 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:26.010 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:26.010 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:26.010 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:27:26.010 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:26.010 14:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:26.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:26.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:26.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:26.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:26.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:26.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:26.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:26.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:26.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:26.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:26.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:26.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:26.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:26.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.580 nvme0n1
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=:
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:26.580 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=:
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:26.581 14:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.150 nvme0n1
00:27:27.150 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.150 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:27.150 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:27.150 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.150 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.150 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK:
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=:
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK:
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: ]]
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=:
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.410 nvme0n1
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==:
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==:
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==:
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]]
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==:
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:27.410 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:27.411 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.411 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.670 nvme0n1
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh:
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk:
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh:
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: ]]
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk:
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:27.670 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:27.671 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:27.671 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:27.671 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:27.671 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:27.671 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:27.671 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:27.671 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:27.671 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:27.671 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:27.671 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.671 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.671 nvme0n1
00:27:27.671 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.671 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:27.671 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.671 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.671 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:27.671 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==:
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9:
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==:
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: ]]
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9:
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.931 nvme0n1
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=:
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=:
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:27.931 14:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@10 -- # set +x 00:27:28.191 nvme0n1 00:27:28.191 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.191 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.191 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.191 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.191 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:28.192 14:47:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: ]] 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.192 14:47:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.192 nvme0n1 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.192 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:28.453 14:47:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]] 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.453 nvme0n1 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.453 
14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: ]] 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.453 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.454 14:47:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.454 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.713 nvme0n1 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.713 14:47:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: ]] 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.713 14:47:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.713 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.973 nvme0n1 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:28.973 14:47:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:28.973 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.974 14:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.974 nvme0n1 00:27:28.974 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.974 
14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.974 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.974 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.974 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.974 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: ]] 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.234 
14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.234 nvme0n1 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.234 14:47:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.234 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]] 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.495 nvme0n1 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.495 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: ]] 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.755 nvme0n1 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.755 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.756 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.756 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.756 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: ]] 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.015 14:47:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.015 14:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.015 nvme0n1 00:27:30.015 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.015 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.015 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.015 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.015 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.015 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.015 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.015 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.015 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.015 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.275 14:47:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.275 nvme0n1 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.275 14:47:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.275 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: ]] 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.536 14:47:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.536 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.796 nvme0n1 00:27:30.796 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.796 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.796 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.796 14:47:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.796 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.796 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.796 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.796 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.796 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.796 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.796 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.796 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.796 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:30.796 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.796 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:30.797 
14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]] 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.797 14:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.056 nvme0n1 00:27:31.056 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.056 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.056 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.056 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.056 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.056 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.056 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:27:31.056 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.056 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.056 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: ]] 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 
00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.316 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.316 14:47:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.317 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.317 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.317 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.317 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.317 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.576 nvme0n1 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.576 14:47:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: ]] 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.576 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.577 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.837 nvme0n1 00:27:31.837 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.837 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.837 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.837 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.837 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.837 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.097 14:47:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.097 14:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.357 nvme0n1 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDFmOGU3ZDIzYTRhMWEyNDI1OTMwYjc4MzQxMTE2NmKCWWyK: 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: ]] 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdkZjdhMmQxNzQxNjc0ZmNhOTgxNzk0NTQwY2ZiZWY1ZTZkNDc3MTU4YjVkYzliMmMyOTZiNDIyZDAyYTMyYiDpNyk=: 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.357 14:47:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.357 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.926 nvme0n1 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.926 14:47:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:32.926 14:47:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]] 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.926 14:47:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.926 14:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.495 nvme0n1 00:27:33.495 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.495 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.495 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.495 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.495 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.495 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.495 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.495 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.495 14:47:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.495 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: ]] 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:33.754 14:47:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.754 14:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.322 nvme0n1 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyOTAyZmU5NDhiMjE5NDQxZjFjY2RkMWY2YmUyNTNhZDZkYzI4NjY2YWY0ZTYxzi/m+A==: 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: ]] 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyZmY1NTE1ZmJhZGIwZTcxZTU5Mjk4MTI5ODMwMWJbucg9: 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.322 14:47:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.322 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.890 nvme0n1 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjVjYTE1YzM1Mzg4NmI2ZTM3YWMxNTY0NzlmMzEyMTM5YWVhMjRhYWZhYmQ4M2IzNDkzYzE4MmU4MzYzOWZhNVpUcQc=: 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.890 
14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.890 14:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.460 nvme0n1 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:35.460 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]] 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.461 request: 00:27:35.461 { 00:27:35.461 "name": "nvme0", 00:27:35.461 "trtype": "tcp", 00:27:35.461 "traddr": "10.0.0.1", 00:27:35.461 "adrfam": "ipv4", 00:27:35.461 "trsvcid": "4420", 00:27:35.461 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:35.461 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:35.461 "prchk_reftag": false, 00:27:35.461 "prchk_guard": false, 00:27:35.461 "hdgst": false, 00:27:35.461 "ddgst": false, 00:27:35.461 "allow_unrecognized_csi": false, 00:27:35.461 "method": "bdev_nvme_attach_controller", 00:27:35.461 "req_id": 1 00:27:35.461 } 00:27:35.461 Got JSON-RPC error 
response 00:27:35.461 response: 00:27:35.461 { 00:27:35.461 "code": -5, 00:27:35.461 "message": "Input/output error" 00:27:35.461 } 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.461 request: 
00:27:35.461 { 00:27:35.461 "name": "nvme0", 00:27:35.461 "trtype": "tcp", 00:27:35.461 "traddr": "10.0.0.1", 00:27:35.461 "adrfam": "ipv4", 00:27:35.461 "trsvcid": "4420", 00:27:35.461 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:35.461 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:35.461 "prchk_reftag": false, 00:27:35.461 "prchk_guard": false, 00:27:35.461 "hdgst": false, 00:27:35.461 "ddgst": false, 00:27:35.461 "dhchap_key": "key2", 00:27:35.461 "allow_unrecognized_csi": false, 00:27:35.461 "method": "bdev_nvme_attach_controller", 00:27:35.461 "req_id": 1 00:27:35.461 } 00:27:35.461 Got JSON-RPC error response 00:27:35.461 response: 00:27:35.461 { 00:27:35.461 "code": -5, 00:27:35.461 "message": "Input/output error" 00:27:35.461 } 00:27:35.461 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:35.462 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:35.462 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:35.462 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:35.462 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:35.462 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.462 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:35.462 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.462 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:35.722 14:47:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.722 request: 00:27:35.722 { 00:27:35.722 "name": "nvme0", 00:27:35.722 "trtype": "tcp", 00:27:35.722 "traddr": "10.0.0.1", 00:27:35.722 "adrfam": "ipv4", 00:27:35.722 "trsvcid": "4420", 00:27:35.722 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:35.722 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:35.722 "prchk_reftag": false, 00:27:35.722 "prchk_guard": false, 00:27:35.722 "hdgst": false, 00:27:35.722 "ddgst": false, 00:27:35.722 "dhchap_key": "key1", 00:27:35.722 "dhchap_ctrlr_key": "ckey2", 00:27:35.722 "allow_unrecognized_csi": false, 00:27:35.722 "method": "bdev_nvme_attach_controller", 00:27:35.722 "req_id": 1 00:27:35.722 } 00:27:35.722 Got JSON-RPC error response 00:27:35.722 response: 00:27:35.722 { 00:27:35.722 "code": -5, 00:27:35.722 "message": "Input/output error" 00:27:35.722 } 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.722 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.723 nvme0n1 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:35.723 14:47:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: ]] 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:35.723 
14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.723 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.983 request: 00:27:35.983 { 00:27:35.983 "name": "nvme0", 00:27:35.983 "dhchap_key": "key1", 00:27:35.983 "dhchap_ctrlr_key": "ckey2", 00:27:35.983 "method": "bdev_nvme_set_keys", 00:27:35.983 "req_id": 1 00:27:35.983 } 00:27:35.983 Got JSON-RPC error response 00:27:35.983 response: 
00:27:35.983 { 00:27:35.983 "code": -13, 00:27:35.983 "message": "Permission denied" 00:27:35.983 } 00:27:35.983 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:35.983 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:35.983 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:35.983 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:35.983 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:35.983 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.983 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:35.983 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.983 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.983 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.983 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:35.983 14:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:36.922 14:47:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzEzMDJhYmRjZjBhNTY1YjBjY2EwOWJjMDJhNTEzNmM4NDA5ODVhMDA1YWI3MzA5fr2wzg==: 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: ]] 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZiYjUzZWRlZTU5MGFjM2YxYjk4MWI0ODA5NDI5NWQzZGQ0OWM5MzBlZTYyZTliNxc/8Q==: 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.922 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.182 nvme0n1 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWNiZWE3YmEwZDNlODEwNGZlMjgwYTdlMDBlZGRjYTcLozGh: 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: ]] 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjRhOGZhOTc4NDZiZGUyMDcyY2NiYTZiNjg0NTI2ZDNFbTRk: 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.182 14:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.182 request: 00:27:37.182 { 00:27:37.182 "name": "nvme0", 00:27:37.182 "dhchap_key": "key2", 00:27:37.182 "dhchap_ctrlr_key": "ckey1", 00:27:37.182 "method": "bdev_nvme_set_keys", 00:27:37.182 "req_id": 1 00:27:37.182 } 00:27:37.182 Got JSON-RPC error response 00:27:37.182 response: 00:27:37.182 { 00:27:37.182 "code": -13, 00:27:37.182 "message": "Permission denied" 00:27:37.182 } 00:27:37.182 14:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:37.182 14:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:37.182 14:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:37.182 14:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:37.182 14:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:37.182 14:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.182 14:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:37.182 14:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.182 14:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.182 14:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.182 14:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:37.182 14:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:38.120 rmmod nvme_tcp 00:27:38.120 rmmod nvme_fabrics 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 4065014 ']' 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 4065014 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 4065014 ']' 00:27:38.120 14:47:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 4065014 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:38.120 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4065014 00:27:38.379 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:38.379 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:38.379 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4065014' 00:27:38.379 killing process with pid 4065014 00:27:38.379 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 4065014 00:27:38.379 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 4065014 00:27:38.379 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:38.379 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:38.379 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:38.379 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:38.379 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:38.379 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:38.379 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:38.379 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:38.379 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:38.379 
14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.379 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.379 14:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.915 14:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:40.915 14:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:40.915 14:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:40.915 14:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:40.915 14:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:40.915 14:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:40.915 14:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:40.915 14:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:40.915 14:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:40.915 14:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:40.915 14:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:40.915 14:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:40.915 14:47:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:42.821 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:42.821 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:42.821 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:42.821 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:42.821 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:42.821 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:42.821 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:42.821 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:42.821 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:42.821 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:42.821 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:42.821 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:42.821 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:42.821 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:42.821 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:42.821 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:42.821 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:42.821 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.SdJ /tmp/spdk.key-null.1SX /tmp/spdk.key-sha256.oAr /tmp/spdk.key-sha384.AQM /tmp/spdk.key-sha512.pHh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:42.821 14:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:45.361 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:45.361 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:45.361 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:45.361 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:45.361 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:45.361 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 
00:27:45.361 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:45.361 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:45.361 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:45.361 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:45.361 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:45.361 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:45.361 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:45.361 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:45.361 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:45.361 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:45.361 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:45.361 00:27:45.361 real 0m45.479s 00:27:45.361 user 0m39.828s 00:27:45.361 sys 0m10.777s 00:27:45.361 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:45.361 14:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.361 ************************************ 00:27:45.361 END TEST nvmf_auth_host 00:27:45.361 ************************************ 00:27:45.361 14:47:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:45.361 14:47:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:45.361 14:47:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:45.361 14:47:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:45.361 14:47:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.361 ************************************ 00:27:45.361 START TEST nvmf_digest 00:27:45.361 ************************************ 00:27:45.361 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:45.361 * Looking for test storage... 00:27:45.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:45.361 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:45.361 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:27:45.361 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:45.623 14:47:52 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:45.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.623 --rc genhtml_branch_coverage=1 00:27:45.623 --rc genhtml_function_coverage=1 00:27:45.623 --rc genhtml_legend=1 00:27:45.623 --rc geninfo_all_blocks=1 00:27:45.623 --rc 
geninfo_unexecuted_blocks=1 00:27:45.623 00:27:45.623 ' 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:45.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.623 --rc genhtml_branch_coverage=1 00:27:45.623 --rc genhtml_function_coverage=1 00:27:45.623 --rc genhtml_legend=1 00:27:45.623 --rc geninfo_all_blocks=1 00:27:45.623 --rc geninfo_unexecuted_blocks=1 00:27:45.623 00:27:45.623 ' 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:45.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.623 --rc genhtml_branch_coverage=1 00:27:45.623 --rc genhtml_function_coverage=1 00:27:45.623 --rc genhtml_legend=1 00:27:45.623 --rc geninfo_all_blocks=1 00:27:45.623 --rc geninfo_unexecuted_blocks=1 00:27:45.623 00:27:45.623 ' 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:45.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.623 --rc genhtml_branch_coverage=1 00:27:45.623 --rc genhtml_function_coverage=1 00:27:45.623 --rc genhtml_legend=1 00:27:45.623 --rc geninfo_all_blocks=1 00:27:45.623 --rc geninfo_unexecuted_blocks=1 00:27:45.623 00:27:45.623 ' 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.623 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@5 -- # export PATH 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:45.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest 
-- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:45.624 14:47:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:50.903 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:50.903 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:50.903 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:50.903 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # 
pci_net_devs=() 00:27:50.903 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:50.903 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:50.903 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:50.903 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:50.903 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:50.903 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:50.903 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:50.903 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:50.903 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:50.903 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:50.903 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:50.903 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.903 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:50.904 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:50.904 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:50.904 Found net devices under 0000:31:00.0: cvl_0_0 00:27:50.904 
14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:50.904 Found net devices under 0000:31:00.1: cvl_0_1 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.904 14:47:57 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:50.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:50.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:27:50.904 00:27:50.904 --- 10.0.0.2 ping statistics --- 00:27:50.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.904 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:50.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:50.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:27:50.904 00:27:50.904 --- 10.0.0.1 ping statistics --- 00:27:50.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.904 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:50.904 ************************************ 00:27:50.904 START TEST nvmf_digest_clean 00:27:50.904 ************************************ 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:50.904 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:50.905 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:50.905 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@10 -- # set +x 00:27:50.905 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=4080929 00:27:50.905 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 4080929 00:27:50.905 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4080929 ']' 00:27:50.905 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.905 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:50.905 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.905 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:50.905 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:50.905 14:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:51.164 [2024-11-20 14:47:57.979231] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:27:51.164 [2024-11-20 14:47:57.979288] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.164 [2024-11-20 14:47:58.063024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.164 [2024-11-20 14:47:58.098373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.164 [2024-11-20 14:47:58.098406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.164 [2024-11-20 14:47:58.098416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.164 [2024-11-20 14:47:58.098423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.164 [2024-11-20 14:47:58.098428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
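For readers reconstructing the target/initiator topology this run builds (the nvmf_tcp_init steps at nvmf/common.sh@250-291 above), the sequence condenses to the sketch below. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones recorded in this run; the DRY_RUN guard is an addition here, since executing for real needs root and the E810 NICs.

```shell
#!/usr/bin/env bash
# Condensed from the nvmf_tcp_init steps recorded above.
# Default is a dry run that only prints the plan; DRY_RUN=0 executes it
# (requires root and the two ice ports enumerating as cvl_0_0 / cvl_0_1).
set -euo pipefail
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" = 0 ]; then "$@"; else echo "+ $*"; fi; }

TARGET_IF=cvl_0_0    INITIATOR_IF=cvl_0_1
TARGET_NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2   INITIATOR_IP=10.0.0.1

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$TARGET_NS"
run ip link set "$TARGET_IF" netns "$TARGET_NS"            # target port moves into the namespace
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$TARGET_NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
run ip netns exec "$TARGET_NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TARGET_IP"                                 # initiator -> target
run ip netns exec "$TARGET_NS" ping -c 1 "$INITIATOR_IP"   # target -> initiator
```

With DRY_RUN=0 this leaves the target port isolated inside the cvl_0_0_ns_spdk namespace, so the NVMe/TCP listener on 10.0.0.2:4420 is only reachable through the host-side initiator port, matching the two-way ping check the log shows.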
00:27:51.164 [2024-11-20 14:47:58.099057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.733 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:51.733 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:51.733 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:51.733 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:51.733 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:51.733 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.733 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:51.733 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:51.734 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:51.734 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.734 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:51.993 null0 00:27:51.994 [2024-11-20 14:47:58.893655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.994 [2024-11-20 14:47:58.917937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.994 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.994 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:27:51.994 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:51.994 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:51.994 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:51.994 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:51.994 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:51.994 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:51.994 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4081269 00:27:51.994 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4081269 /var/tmp/bperf.sock 00:27:51.994 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4081269 ']' 00:27:51.994 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:51.994 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:51.994 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:51.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
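Stripped of the xtrace prefixes, the digest-clean pass recorded around this point (host/digest.sh@82-96) is a short RPC sequence: start bdevperf paused with --wait-for-rpc, finish framework init, attach the controller with data digest enabled, run the workload, then pull accel stats to confirm crc32c actually executed. A condensed sketch; the workspace path is this run's, and the DRY_RUN guard is an addition (a real run also backgrounds bdevperf and kills it afterwards):

```shell
#!/usr/bin/env bash
# Condensed digest-clean flow from the log above.
# Default is a dry run; DRY_RUN=0 needs the SPDK build and a live target.
set -euo pipefail
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" = 0 ]; then "$@"; else echo "+ $*"; fi; }

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# bdevperf starts with --wait-for-rpc so digest settings land before init
run "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
run "$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init
run "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
run "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
# verify crc32c executed in the expected accel module (software when no DSA)
run "$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats
```

The final accel_get_stats output is what the test's jq filter (`select(.opcode=="crc32c")`) inspects to check that the expected module, software here, performed the digest work.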
00:27:51.994 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:51.994 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:51.994 14:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:51.994 [2024-11-20 14:47:58.960919] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:27:51.994 [2024-11-20 14:47:58.960984] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4081269 ] 00:27:51.994 [2024-11-20 14:47:59.044471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.254 [2024-11-20 14:47:59.096606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.822 14:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:52.822 14:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:52.822 14:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:52.822 14:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:52.822 14:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:53.114 14:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-b nvme0 00:27:53.114 14:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:53.373 nvme0n1 00:27:53.373 14:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:53.373 14:48:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:53.373 Running I/O for 2 seconds... 00:27:55.687 21799.00 IOPS, 85.15 MiB/s [2024-11-20T13:48:02.747Z] 24424.50 IOPS, 95.41 MiB/s 00:27:55.687 Latency(us) 00:27:55.687 [2024-11-20T13:48:02.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.687 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:55.687 nvme0n1 : 2.00 24440.07 95.47 0.00 0.00 5233.14 2157.23 14964.05 00:27:55.687 [2024-11-20T13:48:02.747Z] =================================================================================================================== 00:27:55.687 [2024-11-20T13:48:02.747Z] Total : 24440.07 95.47 0.00 0.00 5233.14 2157.23 14964.05 00:27:55.687 { 00:27:55.687 "results": [ 00:27:55.687 { 00:27:55.687 "job": "nvme0n1", 00:27:55.687 "core_mask": "0x2", 00:27:55.687 "workload": "randread", 00:27:55.687 "status": "finished", 00:27:55.687 "queue_depth": 128, 00:27:55.687 "io_size": 4096, 00:27:55.687 "runtime": 2.003963, 00:27:55.687 "iops": 24440.071997337276, 00:27:55.687 "mibps": 95.46903123959873, 00:27:55.687 "io_failed": 0, 00:27:55.687 "io_timeout": 0, 00:27:55.687 "avg_latency_us": 5233.144504563366, 00:27:55.687 "min_latency_us": 2157.2266666666665, 00:27:55.687 "max_latency_us": 14964.053333333333 00:27:55.687 } 00:27:55.687 ], 00:27:55.687 "core_count": 1 
00:27:55.687 } 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:55.687 | select(.opcode=="crc32c") 00:27:55.687 | "\(.module_name) \(.executed)"' 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4081269 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4081269 ']' 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4081269 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 4081269 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:55.687 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4081269' 00:27:55.688 killing process with pid 4081269 00:27:55.688 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4081269 00:27:55.688 Received shutdown signal, test time was about 2.000000 seconds 00:27:55.688 00:27:55.688 Latency(us) 00:27:55.688 [2024-11-20T13:48:02.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.688 [2024-11-20T13:48:02.748Z] =================================================================================================================== 00:27:55.688 [2024-11-20T13:48:02.748Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:55.688 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4081269 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@80 -- # scan_dsa=false 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4082063 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4082063 /var/tmp/bperf.sock 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4082063 ']' 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:55.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:55.947 [2024-11-20 14:48:02.796502] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:27:55.947 [2024-11-20 14:48:02.796562] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082063 ] 00:27:55.947 I/O size of 131072 is greater than zero copy threshold (65536). 
00:27:55.947 Zero copy mechanism will not be used. 00:27:55.947 [2024-11-20 14:48:02.860191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.947 [2024-11-20 14:48:02.888975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:55.947 14:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:56.207 14:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:56.207 14:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:56.467 nvme0n1 00:27:56.467 14:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:56.467 14:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:56.467 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:56.467 Zero copy mechanism will not be used. 00:27:56.467 Running I/O for 2 seconds... 
00:27:58.789 5328.00 IOPS, 666.00 MiB/s [2024-11-20T13:48:05.849Z] 6032.00 IOPS, 754.00 MiB/s 00:27:58.789 Latency(us) 00:27:58.789 [2024-11-20T13:48:05.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.789 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:58.789 nvme0n1 : 2.00 6035.48 754.43 0.00 0.00 2648.40 467.63 9939.63 00:27:58.789 [2024-11-20T13:48:05.850Z] =================================================================================================================== 00:27:58.790 [2024-11-20T13:48:05.850Z] Total : 6035.48 754.43 0.00 0.00 2648.40 467.63 9939.63 00:27:58.790 { 00:27:58.790 "results": [ 00:27:58.790 { 00:27:58.790 "job": "nvme0n1", 00:27:58.790 "core_mask": "0x2", 00:27:58.790 "workload": "randread", 00:27:58.790 "status": "finished", 00:27:58.790 "queue_depth": 16, 00:27:58.790 "io_size": 131072, 00:27:58.790 "runtime": 2.001498, 00:27:58.790 "iops": 6035.479425909994, 00:27:58.790 "mibps": 754.4349282387492, 00:27:58.790 "io_failed": 0, 00:27:58.790 "io_timeout": 0, 00:27:58.790 "avg_latency_us": 2648.3958675496688, 00:27:58.790 "min_latency_us": 467.62666666666667, 00:27:58.790 "max_latency_us": 9939.626666666667 00:27:58.790 } 00:27:58.790 ], 00:27:58.790 "core_count": 1 00:27:58.790 } 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:58.790 | select(.opcode=="crc32c") 00:27:58.790 | "\(.module_name) \(.executed)"' 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4082063 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4082063 ']' 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4082063 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4082063 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4082063' 00:27:58.790 killing process with pid 4082063 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4082063 00:27:58.790 Received shutdown signal, test time was about 2.000000 seconds 
00:27:58.790 00:27:58.790 Latency(us) 00:27:58.790 [2024-11-20T13:48:05.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.790 [2024-11-20T13:48:05.850Z] =================================================================================================================== 00:27:58.790 [2024-11-20T13:48:05.850Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4082063 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4082737 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4082737 /var/tmp/bperf.sock 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4082737 ']' 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 
00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:58.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:58.790 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:58.790 [2024-11-20 14:48:05.829609] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:27:58.790 [2024-11-20 14:48:05.829664] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082737 ] 00:27:59.050 [2024-11-20 14:48:05.894817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.050 [2024-11-20 14:48:05.924194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.050 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:59.050 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:59.050 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:59.050 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:59.050 14:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:59.310 14:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:59.310 14:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:59.569 nvme0n1 00:27:59.569 14:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:59.569 14:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:59.569 Running I/O for 2 seconds... 
00:28:01.888 30445.00 IOPS, 118.93 MiB/s [2024-11-20T13:48:08.948Z] 30463.00 IOPS, 119.00 MiB/s 00:28:01.888 Latency(us) 00:28:01.888 [2024-11-20T13:48:08.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.888 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:01.888 nvme0n1 : 2.01 30477.45 119.05 0.00 0.00 4194.20 1966.08 7755.09 00:28:01.888 [2024-11-20T13:48:08.948Z] =================================================================================================================== 00:28:01.888 [2024-11-20T13:48:08.948Z] Total : 30477.45 119.05 0.00 0.00 4194.20 1966.08 7755.09 00:28:01.888 { 00:28:01.888 "results": [ 00:28:01.888 { 00:28:01.888 "job": "nvme0n1", 00:28:01.888 "core_mask": "0x2", 00:28:01.888 "workload": "randwrite", 00:28:01.888 "status": "finished", 00:28:01.888 "queue_depth": 128, 00:28:01.888 "io_size": 4096, 00:28:01.888 "runtime": 2.006303, 00:28:01.888 "iops": 30477.450315331233, 00:28:01.888 "mibps": 119.05254029426263, 00:28:01.888 "io_failed": 0, 00:28:01.888 "io_timeout": 0, 00:28:01.888 "avg_latency_us": 4194.203208224988, 00:28:01.888 "min_latency_us": 1966.08, 00:28:01.888 "max_latency_us": 7755.093333333333 00:28:01.888 } 00:28:01.888 ], 00:28:01.888 "core_count": 1 00:28:01.888 } 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:01.888 | 
select(.opcode=="crc32c") 00:28:01.888 | "\(.module_name) \(.executed)"' 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4082737 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4082737 ']' 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4082737 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4082737 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4082737' 00:28:01.888 killing process with pid 4082737 00:28:01.888 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4082737 00:28:01.888 Received shutdown signal, test time was about 2.000000 seconds 00:28:01.888 00:28:01.888 Latency(us) 
00:28:01.888 [2024-11-20T13:48:08.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.889 [2024-11-20T13:48:08.949Z] =================================================================================================================== 00:28:01.889 [2024-11-20T13:48:08.949Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:01.889 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4082737 00:28:01.889 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:01.889 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:01.889 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:01.889 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:01.889 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:01.889 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:01.889 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:01.889 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4083861 00:28:01.889 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4083861 /var/tmp/bperf.sock 00:28:01.889 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4083861 ']' 00:28:01.889 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:01.889 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:01.889 14:48:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:01.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:01.889 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:01.889 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:01.889 14:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:02.148 [2024-11-20 14:48:08.950342] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:28:02.148 [2024-11-20 14:48:08.950390] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4083861 ] 00:28:02.148 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:02.148 Zero copy mechanism will not be used. 
00:28:02.148 [2024-11-20 14:48:09.006423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.148 [2024-11-20 14:48:09.035884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.148 14:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:02.148 14:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:02.148 14:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:02.148 14:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:02.148 14:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:02.406 14:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:02.406 14:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:02.665 nvme0n1 00:28:02.665 14:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:02.665 14:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:02.665 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:02.665 Zero copy mechanism will not be used. 00:28:02.665 Running I/O for 2 seconds... 
00:28:05.057 4909.00 IOPS, 613.62 MiB/s [2024-11-20T13:48:12.117Z] 4316.50 IOPS, 539.56 MiB/s 00:28:05.057 Latency(us) 00:28:05.057 [2024-11-20T13:48:12.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.057 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:05.057 nvme0n1 : 2.01 4313.90 539.24 0.00 0.00 3702.58 1167.36 11851.09 00:28:05.057 [2024-11-20T13:48:12.117Z] =================================================================================================================== 00:28:05.057 [2024-11-20T13:48:12.117Z] Total : 4313.90 539.24 0.00 0.00 3702.58 1167.36 11851.09 00:28:05.057 { 00:28:05.057 "results": [ 00:28:05.057 { 00:28:05.057 "job": "nvme0n1", 00:28:05.057 "core_mask": "0x2", 00:28:05.057 "workload": "randwrite", 00:28:05.057 "status": "finished", 00:28:05.057 "queue_depth": 16, 00:28:05.057 "io_size": 131072, 00:28:05.057 "runtime": 2.005611, 00:28:05.057 "iops": 4313.897360953844, 00:28:05.057 "mibps": 539.2371701192305, 00:28:05.057 "io_failed": 0, 00:28:05.057 "io_timeout": 0, 00:28:05.057 "avg_latency_us": 3702.577827092002, 00:28:05.057 "min_latency_us": 1167.36, 00:28:05.057 "max_latency_us": 11851.093333333334 00:28:05.057 } 00:28:05.057 ], 00:28:05.057 "core_count": 1 00:28:05.057 } 00:28:05.057 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:05.057 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:05.058 | select(.opcode=="crc32c") 00:28:05.058 | "\(.module_name) \(.executed)"' 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4083861 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4083861 ']' 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4083861 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4083861 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4083861' 00:28:05.058 killing process with pid 4083861 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4083861 00:28:05.058 Received shutdown signal, test time was about 2.000000 seconds 
00:28:05.058 00:28:05.058 Latency(us) 00:28:05.058 [2024-11-20T13:48:12.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.058 [2024-11-20T13:48:12.118Z] =================================================================================================================== 00:28:05.058 [2024-11-20T13:48:12.118Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4083861 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 4080929 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4080929 ']' 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4080929 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:05.058 14:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4080929 00:28:05.058 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:05.058 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:05.058 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4080929' 00:28:05.058 killing process with pid 4080929 00:28:05.058 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4080929 00:28:05.058 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4080929 00:28:05.340 00:28:05.340 
real 0m14.177s 00:28:05.340 user 0m27.571s 00:28:05.340 sys 0m3.019s 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:05.340 ************************************ 00:28:05.340 END TEST nvmf_digest_clean 00:28:05.340 ************************************ 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:05.340 ************************************ 00:28:05.340 START TEST nvmf_digest_error 00:28:05.340 ************************************ 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=4084568 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 4084568 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 
4084568 ']' 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:05.340 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:05.340 [2024-11-20 14:48:12.204145] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:28:05.341 [2024-11-20 14:48:12.204194] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.341 [2024-11-20 14:48:12.273359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.341 [2024-11-20 14:48:12.301886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.341 [2024-11-20 14:48:12.301912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:05.341 [2024-11-20 14:48:12.301918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:05.341 [2024-11-20 14:48:12.301923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:05.341 [2024-11-20 14:48:12.301927] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:05.341 [2024-11-20 14:48:12.302393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:05.341 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:05.341 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:05.341 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:05.341 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:05.341 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:05.341 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:05.341 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:28:05.341 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:05.341 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:05.341 [2024-11-20 14:48:12.354734] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:28:05.341 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:05.341 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:28:05.341 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:28:05.341 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:05.341 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:05.612 null0
00:28:05.612 [2024-11-20 14:48:12.430075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:05.612 [2024-11-20 14:48:12.454283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:05.612 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:05.612 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:28:05.612 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:05.612 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:05.612 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:05.612 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:05.612 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4084596
00:28:05.612 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4084596 /var/tmp/bperf.sock
00:28:05.612 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4084596 ']'
00:28:05.612 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:05.613 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:05.613 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:05.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:05.613 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:05.613 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:05.613 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:28:05.613 [2024-11-20 14:48:12.492611] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization...
00:28:05.613 [2024-11-20 14:48:12.492659] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4084596 ]
00:28:05.613 [2024-11-20 14:48:12.557653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:05.613 [2024-11-20 14:48:12.587898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:05.613 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:05.613 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:05.613 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:05.613 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:05.873 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:05.873 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:05.873 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:05.873 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:05.873 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:05.873 14:48:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:06.133 nvme0n1
00:28:06.133 14:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:06.133 14:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:06.133 14:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:06.133 14:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:06.133 14:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:06.133 14:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:06.133 Running I/O for 2 seconds...
00:28:06.133 [2024-11-20 14:48:13.146545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.133 [2024-11-20 14:48:13.146575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.133 [2024-11-20 14:48:13.146584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.133 [2024-11-20 14:48:13.156710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.133 [2024-11-20 14:48:13.156731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.133 [2024-11-20 14:48:13.156739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.133 [2024-11-20 14:48:13.165731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.133 [2024-11-20 14:48:13.165750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.133 [2024-11-20 14:48:13.165757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.133 [2024-11-20 14:48:13.177456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.133 [2024-11-20 14:48:13.177475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.133 [2024-11-20 14:48:13.177482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.133 [2024-11-20 14:48:13.184755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.133 [2024-11-20 14:48:13.184774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.133 [2024-11-20 14:48:13.184780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.393 [2024-11-20 14:48:13.195690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.393 [2024-11-20 14:48:13.195710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.393 [2024-11-20 14:48:13.195717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.393 [2024-11-20 14:48:13.205176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.393 [2024-11-20 14:48:13.205194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.393 [2024-11-20 14:48:13.205200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.393 [2024-11-20 14:48:13.213405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.393 [2024-11-20 14:48:13.213423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.393 [2024-11-20 14:48:13.213430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.393 [2024-11-20 14:48:13.223020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.393 [2024-11-20 14:48:13.223047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.393 [2024-11-20 14:48:13.223054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.393 [2024-11-20 14:48:13.231445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.393 [2024-11-20 14:48:13.231463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.393 [2024-11-20 14:48:13.231469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.393 [2024-11-20 14:48:13.240809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.393 [2024-11-20 14:48:13.240826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.393 [2024-11-20 14:48:13.240833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.393 [2024-11-20 14:48:13.249945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.393 [2024-11-20 14:48:13.249962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.393 [2024-11-20 14:48:13.249969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.393 [2024-11-20 14:48:13.257544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.393 [2024-11-20 14:48:13.257563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.393 [2024-11-20 14:48:13.257571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.393 [2024-11-20 14:48:13.268163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.393 [2024-11-20 14:48:13.268181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.393 [2024-11-20 14:48:13.268188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.393 [2024-11-20 14:48:13.277984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.393 [2024-11-20 14:48:13.278002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.393 [2024-11-20 14:48:13.278009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.393 [2024-11-20 14:48:13.287054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.393 [2024-11-20 14:48:13.287071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.393 [2024-11-20 14:48:13.287078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.393 [2024-11-20 14:48:13.296023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.393 [2024-11-20 14:48:13.296040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.393 [2024-11-20 14:48:13.296046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.393 [2024-11-20 14:48:13.304719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.393 [2024-11-20 14:48:13.304736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.393 [2024-11-20 14:48:13.304743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.393 [2024-11-20 14:48:13.315307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.394 [2024-11-20 14:48:13.315325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.394 [2024-11-20 14:48:13.315332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.394 [2024-11-20 14:48:13.326926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.394 [2024-11-20 14:48:13.326943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.394 [2024-11-20 14:48:13.326950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.394 [2024-11-20 14:48:13.334931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.394 [2024-11-20 14:48:13.334949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.394 [2024-11-20 14:48:13.334955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.394 [2024-11-20 14:48:13.346652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.394 [2024-11-20 14:48:13.346670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.394 [2024-11-20 14:48:13.346677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.394 [2024-11-20 14:48:13.357059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.394 [2024-11-20 14:48:13.357076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.394 [2024-11-20 14:48:13.357083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.394 [2024-11-20 14:48:13.366037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.394 [2024-11-20 14:48:13.366055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.394 [2024-11-20 14:48:13.366062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.394 [2024-11-20 14:48:13.374600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.394 [2024-11-20 14:48:13.374617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.394 [2024-11-20 14:48:13.374624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.394 [2024-11-20 14:48:13.383967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.394 [2024-11-20 14:48:13.383984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.394 [2024-11-20 14:48:13.383994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.394 [2024-11-20 14:48:13.392859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.394 [2024-11-20 14:48:13.392877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.394 [2024-11-20 14:48:13.392883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.394 [2024-11-20 14:48:13.403130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.394 [2024-11-20 14:48:13.403148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.394 [2024-11-20 14:48:13.403156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.394 [2024-11-20 14:48:13.412375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.394 [2024-11-20 14:48:13.412392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.394 [2024-11-20 14:48:13.412398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.394 [2024-11-20 14:48:13.422029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.394 [2024-11-20 14:48:13.422046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.394 [2024-11-20 14:48:13.422053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.394 [2024-11-20 14:48:13.431118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.394 [2024-11-20 14:48:13.431136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.394 [2024-11-20 14:48:13.431143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.394 [2024-11-20 14:48:13.440439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.394 [2024-11-20 14:48:13.440457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.394 [2024-11-20 14:48:13.440465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.394 [2024-11-20 14:48:13.449249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.394 [2024-11-20 14:48:13.449266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.394 [2024-11-20 14:48:13.449273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.654 [2024-11-20 14:48:13.458515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.654 [2024-11-20 14:48:13.458534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.654 [2024-11-20 14:48:13.458540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.654 [2024-11-20 14:48:13.466501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.654 [2024-11-20 14:48:13.466521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.654 [2024-11-20 14:48:13.466528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.654 [2024-11-20 14:48:13.475335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.654 [2024-11-20 14:48:13.475353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.654 [2024-11-20 14:48:13.475359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.654 [2024-11-20 14:48:13.485606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.654 [2024-11-20 14:48:13.485623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.654 [2024-11-20 14:48:13.485630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.654 [2024-11-20 14:48:13.493911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.654 [2024-11-20 14:48:13.493929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.654 [2024-11-20 14:48:13.493935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.654 [2024-11-20 14:48:13.502644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.654 [2024-11-20 14:48:13.502662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.654 [2024-11-20 14:48:13.502669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.654 [2024-11-20 14:48:13.512076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.654 [2024-11-20 14:48:13.512093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.654 [2024-11-20 14:48:13.512100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.654 [2024-11-20 14:48:13.520464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.654 [2024-11-20 14:48:13.520481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.654 [2024-11-20 14:48:13.520488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.654 [2024-11-20 14:48:13.529517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.654 [2024-11-20 14:48:13.529535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.529541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.537840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.537858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.537867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.546972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.546990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.546997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.556422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.556440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.556447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.564202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.564219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.564226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.575261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.575279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.575286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.586311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.586329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.586336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.597521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.597539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.597545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.609116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.609134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.609141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.616931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.616949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.616955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.626377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.626398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.626404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.635724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.635742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.635749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.644827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.644845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.644851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.653441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.653459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.653465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.663889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.663908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.663915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.674557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.674575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.674582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.682689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.682707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.682713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.691607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.691625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.691631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.700783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.700801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.700808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.655 [2024-11-20 14:48:13.710590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.655 [2024-11-20 14:48:13.710608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.655 [2024-11-20 14:48:13.710615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:06.916 [2024-11-20 14:48:13.719804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0)
00:28:06.916 [2024-11-20 14:48:13.719821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11022 len:1 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:28:06.916 [2024-11-20 14:48:13.719828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.916 [2024-11-20 14:48:13.727047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.916 [2024-11-20 14:48:13.727065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.916 [2024-11-20 14:48:13.727071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.916 [2024-11-20 14:48:13.736998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.916 [2024-11-20 14:48:13.737016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.916 [2024-11-20 14:48:13.737022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.916 [2024-11-20 14:48:13.747275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.916 [2024-11-20 14:48:13.747293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.916 [2024-11-20 14:48:13.747300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.916 [2024-11-20 14:48:13.757925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.916 [2024-11-20 14:48:13.757942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:18952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.916 [2024-11-20 14:48:13.757949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.916 [2024-11-20 14:48:13.766667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.916 [2024-11-20 14:48:13.766685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.916 [2024-11-20 14:48:13.766692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.916 [2024-11-20 14:48:13.775963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.916 [2024-11-20 14:48:13.775980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.916 [2024-11-20 14:48:13.775987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.916 [2024-11-20 14:48:13.784066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.916 [2024-11-20 14:48:13.784083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.916 [2024-11-20 14:48:13.784093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.916 [2024-11-20 14:48:13.792841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.916 [2024-11-20 14:48:13.792859] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.792866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.802106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.802124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.802131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.812568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.812586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.812593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.822789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.822806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.822813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.831503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.831521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.831527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.843425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.843443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.843450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.854405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.854422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.854428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.862483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.862501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.862507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.874481] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.874501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.874508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.885583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.885602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.885608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.894159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.894177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.894183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.903213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.903232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.903238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.912843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.912860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.912867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.921343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.921360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.921367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.930294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.930311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.930318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.939758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.939775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.939782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.947549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.947567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.947573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.957479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.957496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.957503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.965647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.965663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.965670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.917 [2024-11-20 14:48:13.974898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:06.917 [2024-11-20 14:48:13.974916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.917 [2024-11-20 14:48:13.974923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.177 [2024-11-20 14:48:13.984617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.177 [2024-11-20 14:48:13.984635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.177 [2024-11-20 14:48:13.984642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.177 [2024-11-20 14:48:13.995102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.177 [2024-11-20 14:48:13.995119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.177 [2024-11-20 14:48:13.995127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.177 [2024-11-20 14:48:14.003120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.177 [2024-11-20 14:48:14.003137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.177 [2024-11-20 14:48:14.003144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.177 [2024-11-20 14:48:14.013310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.177 [2024-11-20 14:48:14.013329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:07.177 [2024-11-20 14:48:14.013335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.177 [2024-11-20 14:48:14.022077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.177 [2024-11-20 14:48:14.022095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.177 [2024-11-20 14:48:14.022101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.177 [2024-11-20 14:48:14.032001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.177 [2024-11-20 14:48:14.032018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.177 [2024-11-20 14:48:14.032029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.177 [2024-11-20 14:48:14.041230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.177 [2024-11-20 14:48:14.041253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.177 [2024-11-20 14:48:14.041260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.177 [2024-11-20 14:48:14.050152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.177 [2024-11-20 14:48:14.050170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 
nsid:1 lba:7016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.177 [2024-11-20 14:48:14.050177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.177 [2024-11-20 14:48:14.058170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.177 [2024-11-20 14:48:14.058188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.177 [2024-11-20 14:48:14.058195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.177 [2024-11-20 14:48:14.067869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.177 [2024-11-20 14:48:14.067888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.177 [2024-11-20 14:48:14.067895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.177 [2024-11-20 14:48:14.077451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.177 [2024-11-20 14:48:14.077469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.177 [2024-11-20 14:48:14.077475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.177 [2024-11-20 14:48:14.087583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.177 [2024-11-20 14:48:14.087602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.177 [2024-11-20 14:48:14.087608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.177 [2024-11-20 14:48:14.096190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.177 [2024-11-20 14:48:14.096208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.177 [2024-11-20 14:48:14.096215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.177 [2024-11-20 14:48:14.105509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.177 [2024-11-20 14:48:14.105527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.178 [2024-11-20 14:48:14.105534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.178 [2024-11-20 14:48:14.114879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.178 [2024-11-20 14:48:14.114900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.178 [2024-11-20 14:48:14.114907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.178 [2024-11-20 14:48:14.122148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 
00:28:07.178 [2024-11-20 14:48:14.122166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.178 [2024-11-20 14:48:14.122172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.178 26946.00 IOPS, 105.26 MiB/s [2024-11-20T13:48:14.238Z] [2024-11-20 14:48:14.133321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.178 [2024-11-20 14:48:14.133339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.178 [2024-11-20 14:48:14.133346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.178 [2024-11-20 14:48:14.145007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.178 [2024-11-20 14:48:14.145025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.178 [2024-11-20 14:48:14.145032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.178 [2024-11-20 14:48:14.152910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.178 [2024-11-20 14:48:14.152928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.178 [2024-11-20 14:48:14.152934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.178 [2024-11-20 
14:48:14.163205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.178 [2024-11-20 14:48:14.163224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.178 [2024-11-20 14:48:14.163230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.178 [2024-11-20 14:48:14.173733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.178 [2024-11-20 14:48:14.173751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.178 [2024-11-20 14:48:14.173758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.178 [2024-11-20 14:48:14.183703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.178 [2024-11-20 14:48:14.183721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.178 [2024-11-20 14:48:14.183728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.178 [2024-11-20 14:48:14.193003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.178 [2024-11-20 14:48:14.193021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.178 [2024-11-20 14:48:14.193031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.178 [2024-11-20 14:48:14.203781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.178 [2024-11-20 14:48:14.203799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.178 [2024-11-20 14:48:14.203806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.178 [2024-11-20 14:48:14.215655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.178 [2024-11-20 14:48:14.215673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.178 [2024-11-20 14:48:14.215680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.178 [2024-11-20 14:48:14.223654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.178 [2024-11-20 14:48:14.223671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.178 [2024-11-20 14:48:14.223678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.178 [2024-11-20 14:48:14.233795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.178 [2024-11-20 14:48:14.233813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.178 [2024-11-20 14:48:14.233819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.438 [2024-11-20 14:48:14.242482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.438 [2024-11-20 14:48:14.242500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.438 [2024-11-20 14:48:14.242507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.438 [2024-11-20 14:48:14.252599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.438 [2024-11-20 14:48:14.252616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.438 [2024-11-20 14:48:14.252624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.438 [2024-11-20 14:48:14.261883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.438 [2024-11-20 14:48:14.261901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.438 [2024-11-20 14:48:14.261908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.438 [2024-11-20 14:48:14.270617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.438 [2024-11-20 14:48:14.270634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.438 [2024-11-20 14:48:14.270641] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.438 [2024-11-20 14:48:14.282175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.438 [2024-11-20 14:48:14.282196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.438 [2024-11-20 14:48:14.282203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.438 [2024-11-20 14:48:14.290760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.438 [2024-11-20 14:48:14.290778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.438 [2024-11-20 14:48:14.290785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.438 [2024-11-20 14:48:14.300740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.438 [2024-11-20 14:48:14.300757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.438 [2024-11-20 14:48:14.300764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.438 [2024-11-20 14:48:14.310345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.438 [2024-11-20 14:48:14.310363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6242 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:07.438 [2024-11-20 14:48:14.310370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.438 [2024-11-20 14:48:14.319010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.438 [2024-11-20 14:48:14.319028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.438 [2024-11-20 14:48:14.319035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.438 [2024-11-20 14:48:14.327882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.438 [2024-11-20 14:48:14.327900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.438 [2024-11-20 14:48:14.327907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.438 [2024-11-20 14:48:14.337980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.438 [2024-11-20 14:48:14.337998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.438 [2024-11-20 14:48:14.338005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.438 [2024-11-20 14:48:14.346530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.438 [2024-11-20 14:48:14.346548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:79 nsid:1 lba:8240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.438 [2024-11-20 14:48:14.346555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.438 [2024-11-20 14:48:14.355853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.438 [2024-11-20 14:48:14.355870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.438 [2024-11-20 14:48:14.355877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.438 [2024-11-20 14:48:14.364955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.438 [2024-11-20 14:48:14.364973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.438 [2024-11-20 14:48:14.364979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.438 [2024-11-20 14:48:14.373274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.438 [2024-11-20 14:48:14.373293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.439 [2024-11-20 14:48:14.373299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.439 [2024-11-20 14:48:14.383168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.439 [2024-11-20 14:48:14.383186] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.439 [2024-11-20 14:48:14.383193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.439 [2024-11-20 14:48:14.391061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.439 [2024-11-20 14:48:14.391078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.439 [2024-11-20 14:48:14.391085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.439 [2024-11-20 14:48:14.399727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.439 [2024-11-20 14:48:14.399745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.439 [2024-11-20 14:48:14.399752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.439 [2024-11-20 14:48:14.410000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.439 [2024-11-20 14:48:14.410017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.439 [2024-11-20 14:48:14.410024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.439 [2024-11-20 14:48:14.418403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x67cee0) 00:28:07.439 [2024-11-20 14:48:14.418420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.439 [2024-11-20 14:48:14.418427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.439 [2024-11-20 14:48:14.426967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.439 [2024-11-20 14:48:14.426984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.439 [2024-11-20 14:48:14.426991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.439 [2024-11-20 14:48:14.436427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.439 [2024-11-20 14:48:14.436445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.439 [2024-11-20 14:48:14.436457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.439 [2024-11-20 14:48:14.445288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.439 [2024-11-20 14:48:14.445305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.439 [2024-11-20 14:48:14.445312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.439 [2024-11-20 14:48:14.454168] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.439 [2024-11-20 14:48:14.454186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.439 [2024-11-20 14:48:14.454192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.439 [2024-11-20 14:48:14.462678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.439 [2024-11-20 14:48:14.462695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.439 [2024-11-20 14:48:14.462702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.439 [2024-11-20 14:48:14.470942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.439 [2024-11-20 14:48:14.470960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.439 [2024-11-20 14:48:14.470967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.439 [2024-11-20 14:48:14.480424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.439 [2024-11-20 14:48:14.480441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.439 [2024-11-20 14:48:14.480448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:07.439 [2024-11-20 14:48:14.489443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.439 [2024-11-20 14:48:14.489461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.439 [2024-11-20 14:48:14.489467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.699 [2024-11-20 14:48:14.499513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.699 [2024-11-20 14:48:14.499531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-11-20 14:48:14.499538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.699 [2024-11-20 14:48:14.509506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.699 [2024-11-20 14:48:14.509524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-11-20 14:48:14.509531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.699 [2024-11-20 14:48:14.517587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.699 [2024-11-20 14:48:14.517608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-11-20 14:48:14.517615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.699 [2024-11-20 14:48:14.526063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.699 [2024-11-20 14:48:14.526081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-11-20 14:48:14.526088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.699 [2024-11-20 14:48:14.535981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.535998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.536005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.546391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.546409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.546416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.555439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.555457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.555463] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.565045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.565063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.565069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.573199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.573215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.573222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.582582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.582599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.582606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.590682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.590700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5851 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.590707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.601222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.601239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.601250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.609313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.609331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.609338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.619664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.619681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.619688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.628379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.628397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:11428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.628404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.637667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.637685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.637691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.647439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.647456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.647462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.657633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.657650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.657657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.666194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.666211] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.666218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.676873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.676890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.676900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.686060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.686078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.686084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.693773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.693790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.693797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.704354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.704371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.704379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.715037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.715054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.715060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.726840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.726858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.726864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.738637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.738654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.738661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.747549] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.747567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.747573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.700 [2024-11-20 14:48:14.756325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.700 [2024-11-20 14:48:14.756343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.700 [2024-11-20 14:48:14.756349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.961 [2024-11-20 14:48:14.764873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.961 [2024-11-20 14:48:14.764890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.961 [2024-11-20 14:48:14.764897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.961 [2024-11-20 14:48:14.773737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.961 [2024-11-20 14:48:14.773754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.961 [2024-11-20 14:48:14.773761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:07.961 [2024-11-20 14:48:14.783126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.961 [2024-11-20 14:48:14.783144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.961 [2024-11-20 14:48:14.783150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.961 [2024-11-20 14:48:14.791477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.961 [2024-11-20 14:48:14.791494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.961 [2024-11-20 14:48:14.791501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.961 [2024-11-20 14:48:14.799828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.961 [2024-11-20 14:48:14.799846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.961 [2024-11-20 14:48:14.799852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.961 [2024-11-20 14:48:14.809068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.961 [2024-11-20 14:48:14.809086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.961 [2024-11-20 14:48:14.809092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.961 [2024-11-20 14:48:14.819243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.961 [2024-11-20 14:48:14.819263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.961 [2024-11-20 14:48:14.819269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.961 [2024-11-20 14:48:14.828451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.961 [2024-11-20 14:48:14.828469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.961 [2024-11-20 14:48:14.828476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.961 [2024-11-20 14:48:14.836878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.961 [2024-11-20 14:48:14.836896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.961 [2024-11-20 14:48:14.836906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.961 [2024-11-20 14:48:14.846096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.961 [2024-11-20 14:48:14.846114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:14.846120] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.962 [2024-11-20 14:48:14.857434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.962 [2024-11-20 14:48:14.857452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:14.857458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.962 [2024-11-20 14:48:14.867686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.962 [2024-11-20 14:48:14.867704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:14.867710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.962 [2024-11-20 14:48:14.876640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.962 [2024-11-20 14:48:14.876656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:14.876663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.962 [2024-11-20 14:48:14.886949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.962 [2024-11-20 14:48:14.886966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4033 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:14.886973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.962 [2024-11-20 14:48:14.897544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.962 [2024-11-20 14:48:14.897561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:14.897568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.962 [2024-11-20 14:48:14.906821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.962 [2024-11-20 14:48:14.906838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:14.906845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.962 [2024-11-20 14:48:14.918602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.962 [2024-11-20 14:48:14.918619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:14.918626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.962 [2024-11-20 14:48:14.926375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.962 [2024-11-20 14:48:14.926395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:16562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:14.926401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.962 [2024-11-20 14:48:14.935885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.962 [2024-11-20 14:48:14.935902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:14.935909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.962 [2024-11-20 14:48:14.944971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.962 [2024-11-20 14:48:14.944988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:14.944995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.962 [2024-11-20 14:48:14.952986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.962 [2024-11-20 14:48:14.953003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:14.953010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.962 [2024-11-20 14:48:14.962601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.962 [2024-11-20 14:48:14.962619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:14.962625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.962 [2024-11-20 14:48:14.973968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.962 [2024-11-20 14:48:14.973985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:14.973992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.962 [2024-11-20 14:48:14.984424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.962 [2024-11-20 14:48:14.984441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:14.984447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.962 [2024-11-20 14:48:14.995273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.962 [2024-11-20 14:48:14.995291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:14.995298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.962 [2024-11-20 14:48:15.003487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 
00:28:07.962 [2024-11-20 14:48:15.003505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:15.003512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.962 [2024-11-20 14:48:15.013318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:07.962 [2024-11-20 14:48:15.013335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.962 [2024-11-20 14:48:15.013341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.223 [2024-11-20 14:48:15.021781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:08.223 [2024-11-20 14:48:15.021798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-11-20 14:48:15.021805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.223 [2024-11-20 14:48:15.029537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:08.223 [2024-11-20 14:48:15.029554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-11-20 14:48:15.029561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.223 [2024-11-20 14:48:15.039309] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:08.223 [2024-11-20 14:48:15.039326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-11-20 14:48:15.039333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.223 [2024-11-20 14:48:15.048829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:08.223 [2024-11-20 14:48:15.048847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-11-20 14:48:15.048854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.223 [2024-11-20 14:48:15.060838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:08.223 [2024-11-20 14:48:15.060856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-11-20 14:48:15.060862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.223 [2024-11-20 14:48:15.069999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:08.223 [2024-11-20 14:48:15.070017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-11-20 14:48:15.070024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:08.223 [2024-11-20 14:48:15.079293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:08.223 [2024-11-20 14:48:15.079311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-11-20 14:48:15.079317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.223 [2024-11-20 14:48:15.088486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:08.223 [2024-11-20 14:48:15.088504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-11-20 14:48:15.088514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.223 [2024-11-20 14:48:15.097049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:08.223 [2024-11-20 14:48:15.097067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-11-20 14:48:15.097074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.223 [2024-11-20 14:48:15.105809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:08.223 [2024-11-20 14:48:15.105826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-11-20 14:48:15.105832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.223 [2024-11-20 14:48:15.114875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:08.223 [2024-11-20 14:48:15.114892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-11-20 14:48:15.114899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.223 [2024-11-20 14:48:15.124438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:08.223 [2024-11-20 14:48:15.124455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-11-20 14:48:15.124462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.223 27027.50 IOPS, 105.58 MiB/s [2024-11-20T13:48:15.283Z] [2024-11-20 14:48:15.134014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x67cee0) 00:28:08.223 [2024-11-20 14:48:15.134029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-11-20 14:48:15.134036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.223 00:28:08.223 Latency(us) 00:28:08.223 [2024-11-20T13:48:15.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.223 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:08.223 nvme0n1 : 2.00 27049.61 105.66 0.00 0.00 4727.57 2170.88 15728.64 
00:28:08.223 [2024-11-20T13:48:15.283Z] =================================================================================================================== 00:28:08.223 [2024-11-20T13:48:15.283Z] Total : 27049.61 105.66 0.00 0.00 4727.57 2170.88 15728.64 00:28:08.223 { 00:28:08.223 "results": [ 00:28:08.223 { 00:28:08.223 "job": "nvme0n1", 00:28:08.223 "core_mask": "0x2", 00:28:08.223 "workload": "randread", 00:28:08.223 "status": "finished", 00:28:08.223 "queue_depth": 128, 00:28:08.223 "io_size": 4096, 00:28:08.223 "runtime": 2.003097, 00:28:08.223 "iops": 27049.61367322701, 00:28:08.223 "mibps": 105.662553411043, 00:28:08.223 "io_failed": 0, 00:28:08.223 "io_timeout": 0, 00:28:08.223 "avg_latency_us": 4727.57042171899, 00:28:08.223 "min_latency_us": 2170.88, 00:28:08.223 "max_latency_us": 15728.64 00:28:08.223 } 00:28:08.223 ], 00:28:08.223 "core_count": 1 00:28:08.223 } 00:28:08.223 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:08.224 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:08.224 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:08.224 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:08.224 | .driver_specific 00:28:08.224 | .nvme_error 00:28:08.224 | .status_code 00:28:08.224 | .command_transient_transport_error' 00:28:08.483 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 212 > 0 )) 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4084596 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4084596 ']' 00:28:08.484 14:48:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4084596 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4084596 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4084596' 00:28:08.484 killing process with pid 4084596 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4084596 00:28:08.484 Received shutdown signal, test time was about 2.000000 seconds 00:28:08.484 00:28:08.484 Latency(us) 00:28:08.484 [2024-11-20T13:48:15.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.484 [2024-11-20T13:48:15.544Z] =================================================================================================================== 00:28:08.484 [2024-11-20T13:48:15.544Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4084596 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # 
rw=randread 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4085271 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4085271 /var/tmp/bperf.sock 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4085271 ']' 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:08.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:08.484 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:08.484 [2024-11-20 14:48:15.490930] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:28:08.484 [2024-11-20 14:48:15.490993] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4085271 ] 00:28:08.484 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:08.484 Zero copy mechanism will not be used. 00:28:08.744 [2024-11-20 14:48:15.555612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.744 [2024-11-20 14:48:15.585076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.744 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:08.744 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:08.744 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:08.744 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:09.004 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:09.004 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.004 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:09.004 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.004 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.004 14:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.004 nvme0n1 00:28:09.004 14:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:09.004 14:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.004 14:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:09.004 14:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.004 14:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:09.004 14:48:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:09.265 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:09.265 Zero copy mechanism will not be used. 00:28:09.265 Running I/O for 2 seconds... 
00:28:09.265 [2024-11-20 14:48:16.133664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:09.265 [2024-11-20 14:48:16.133697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.265 [2024-11-20 14:48:16.133706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:09.265 [2024-11-20 14:48:16.143406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:09.266 [2024-11-20 14:48:16.143430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.266 [2024-11-20 14:48:16.143437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:09.266 [2024-11-20 14:48:16.153308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:09.266 [2024-11-20 14:48:16.153337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.266 [2024-11-20 14:48:16.153344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:09.266 [2024-11-20 14:48:16.164330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:09.266 [2024-11-20 14:48:16.164351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.266 [2024-11-20 14:48:16.164358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:09.266 [2024-11-20 14:48:16.174842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:09.266 [2024-11-20 14:48:16.174861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.266 [2024-11-20 14:48:16.174868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:09.266 [2024-11-20 14:48:16.186161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:09.266 [2024-11-20 14:48:16.186180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.266 [2024-11-20 14:48:16.186187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:09.266 [2024-11-20 14:48:16.195662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:09.266 [2024-11-20 14:48:16.195682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.266 [2024-11-20 14:48:16.195689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:09.266 [2024-11-20 14:48:16.205947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:09.266 [2024-11-20 14:48:16.205967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.266 [2024-11-20 14:48:16.205974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:09.266 [2024-11-20 14:48:16.215714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:09.266 [2024-11-20 14:48:16.215733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.266 [2024-11-20 14:48:16.215740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:09.266 [2024-11-20 14:48:16.225656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:09.266 [2024-11-20 14:48:16.225675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.266 [2024-11-20 14:48:16.225682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:09.266 [2024-11-20 14:48:16.235333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:09.266 [2024-11-20 14:48:16.235351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.266 [2024-11-20 14:48:16.235358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:09.266 [2024-11-20 14:48:16.245373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:09.266 [2024-11-20 14:48:16.245393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:09.266 [2024-11-20 14:48:16.245400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:09.266 [2024-11-20 14:48:16.255926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:09.266 [2024-11-20 14:48:16.255946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.266 [2024-11-20 14:48:16.255953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:09.266 [2024-11-20 14:48:16.266256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:09.266 [2024-11-20 14:48:16.266275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.266 [2024-11-20 14:48:16.266282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:09.266 [2024-11-20 14:48:16.276548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:09.266 [2024-11-20 14:48:16.276568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.266 [2024-11-20 14:48:16.276575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:09.266 [2024-11-20 14:48:16.287229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:09.266 [2024-11-20 14:48:16.287254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:14 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.266 [2024-11-20 14:48:16.287265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:09.266 [2024-11-20 14:48:16.297774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.266 [2024-11-20 14:48:16.297794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.266 [2024-11-20 14:48:16.297800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:09.266 [2024-11-20 14:48:16.308649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.266 [2024-11-20 14:48:16.308669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.266 [2024-11-20 14:48:16.308676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:09.266 [2024-11-20 14:48:16.319774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.266 [2024-11-20 14:48:16.319793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.266 [2024-11-20 14:48:16.319800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:09.527 [2024-11-20 14:48:16.330927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.527 [2024-11-20 14:48:16.330951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.527 [2024-11-20 14:48:16.330957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:09.527 [2024-11-20 14:48:16.341953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.527 [2024-11-20 14:48:16.341973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.527 [2024-11-20 14:48:16.341980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:09.527 [2024-11-20 14:48:16.352112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.527 [2024-11-20 14:48:16.352132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.527 [2024-11-20 14:48:16.352138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:09.527 [2024-11-20 14:48:16.363211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.527 [2024-11-20 14:48:16.363230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.527 [2024-11-20 14:48:16.363237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:09.527 [2024-11-20 14:48:16.372605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.527 [2024-11-20 14:48:16.372624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.527 [2024-11-20 14:48:16.372631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:09.527 [2024-11-20 14:48:16.383236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.527 [2024-11-20 14:48:16.383260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.527 [2024-11-20 14:48:16.383267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:09.527 [2024-11-20 14:48:16.390925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.527 [2024-11-20 14:48:16.390943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.390950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.401499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.401519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.401526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.408448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.408466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.408473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.410563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.410580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.410587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.418629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.418648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.418655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.424226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.424251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.424258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.432781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.432800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.432806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.440789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.440808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.440815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.445829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.445848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.445854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.449844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.449862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.449869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.454324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.454342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.454348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.464217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.464237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.464252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.469014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.469033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.469040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.477346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.477364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.477371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.487501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.487519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.487525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.494341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.494359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.494366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.502976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.502994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.503001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.510702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.510721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.510728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.519391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.519409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.519416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.528852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.528870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.528877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.538880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.538903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.538909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.549440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.549460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.549466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.560910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.560930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.560936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.572182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.572201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.572208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:09.528 [2024-11-20 14:48:16.583859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.528 [2024-11-20 14:48:16.583877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.528 [2024-11-20 14:48:16.583884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.595815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.595834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.595840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.606059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.606077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.606083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.616426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.616445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.616452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.626589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.626608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.626615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.637598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.637617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.637623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.647277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.647295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.647301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.656968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.656987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.656994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.666608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.666628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.666635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.677664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.677683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.677690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.688192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.688210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.688217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.694782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.694801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.694807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.698823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.698842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.698849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.702844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.702866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.702873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.709958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.709977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.709984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.719173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.719192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.719199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.730170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.730190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.730196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.741417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.741436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.741442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.753186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.753205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.753212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.764278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.764297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.764304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.776082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.776102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.789 [2024-11-20 14:48:16.776108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:09.789 [2024-11-20 14:48:16.786855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.789 [2024-11-20 14:48:16.786874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.790 [2024-11-20 14:48:16.786881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:09.790 [2024-11-20 14:48:16.797790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.790 [2024-11-20 14:48:16.797809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.790 [2024-11-20 14:48:16.797816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:09.790 [2024-11-20 14:48:16.809394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.790 [2024-11-20 14:48:16.809413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.790 [2024-11-20 14:48:16.809419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:09.790 [2024-11-20 14:48:16.820777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.790 [2024-11-20 14:48:16.820797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.790 [2024-11-20 14:48:16.820803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:09.790 [2024-11-20 14:48:16.831820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.790 [2024-11-20 14:48:16.831839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.790 [2024-11-20 14:48:16.831846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:09.790 [2024-11-20 14:48:16.841757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:09.790 [2024-11-20 14:48:16.841777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.790 [2024-11-20 14:48:16.841783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:10.050 [2024-11-20 14:48:16.852946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.050 [2024-11-20 14:48:16.852966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.050 [2024-11-20 14:48:16.852973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:10.050 [2024-11-20 14:48:16.864113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.050 [2024-11-20 14:48:16.864131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.050 [2024-11-20 14:48:16.864138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:10.050 [2024-11-20 14:48:16.875136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.050 [2024-11-20 14:48:16.875155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.050 [2024-11-20 14:48:16.875162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:10.050 [2024-11-20 14:48:16.886218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.050 [2024-11-20 14:48:16.886237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.050 [2024-11-20 14:48:16.886256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:10.050 [2024-11-20 14:48:16.897480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.050 [2024-11-20 14:48:16.897500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.050 [2024-11-20 14:48:16.897507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:10.050 [2024-11-20 14:48:16.909213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.050 [2024-11-20 14:48:16.909232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.050 [2024-11-20 14:48:16.909239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:10.050 [2024-11-20 14:48:16.920688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.050 [2024-11-20 14:48:16.920708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.050 [2024-11-20 14:48:16.920715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:10.050 [2024-11-20 14:48:16.931863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.050 [2024-11-20 14:48:16.931883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.050 [2024-11-20 14:48:16.931890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:10.050 [2024-11-20 14:48:16.943371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.050 [2024-11-20 14:48:16.943390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.050 [2024-11-20 14:48:16.943397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:10.050 [2024-11-20 14:48:16.954529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.050 [2024-11-20 14:48:16.954548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.050 [2024-11-20 14:48:16.954555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:10.050 [2024-11-20 14:48:16.963382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.050 [2024-11-20 14:48:16.963402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.050 [2024-11-20 14:48:16.963408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:10.050 [2024-11-20 14:48:16.971259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.051 [2024-11-20 14:48:16.971278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.051 [2024-11-20 14:48:16.971285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:10.051 [2024-11-20 14:48:16.977299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.051 [2024-11-20 14:48:16.977321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.051 [2024-11-20 14:48:16.977327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:10.051 [2024-11-20 14:48:16.982876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.051 [2024-11-20 14:48:16.982896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.051 [2024-11-20 14:48:16.982902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:10.051 [2024-11-20 14:48:16.992725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.051 [2024-11-20 14:48:16.992744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.051 [2024-11-20 14:48:16.992750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:10.051 [2024-11-20 14:48:17.002935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.051 [2024-11-20 14:48:17.002955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.051 [2024-11-20 14:48:17.002962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:10.051 [2024-11-20 14:48:17.010414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.051 [2024-11-20 14:48:17.010434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.051 [2024-11-20 14:48:17.010440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:10.051 [2024-11-20 14:48:17.015738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.051 [2024-11-20 14:48:17.015758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.051 [2024-11-20 14:48:17.015764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:10.051 [2024-11-20 14:48:17.018884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.051 [2024-11-20 14:48:17.018903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.051 [2024-11-20 14:48:17.018910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:10.051 [2024-11-20 14:48:17.022623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.051 [2024-11-20 14:48:17.022642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.051 [2024-11-20 14:48:17.022649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:10.051 [2024-11-20 14:48:17.026403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0)
00:28:10.051 [2024-11-20 14:48:17.026422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1248 len:32
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.051 [2024-11-20 14:48:17.026432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.051 [2024-11-20 14:48:17.035817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.051 [2024-11-20 14:48:17.035836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.051 [2024-11-20 14:48:17.035843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.051 [2024-11-20 14:48:17.043427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.051 [2024-11-20 14:48:17.043446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.051 [2024-11-20 14:48:17.043453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.051 [2024-11-20 14:48:17.054123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.051 [2024-11-20 14:48:17.054143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.051 [2024-11-20 14:48:17.054149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.051 [2024-11-20 14:48:17.065624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.051 [2024-11-20 14:48:17.065643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.051 [2024-11-20 14:48:17.065649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.051 [2024-11-20 14:48:17.077007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.051 [2024-11-20 14:48:17.077025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.051 [2024-11-20 14:48:17.077031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.051 [2024-11-20 14:48:17.081998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.051 [2024-11-20 14:48:17.082017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.051 [2024-11-20 14:48:17.082023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.051 [2024-11-20 14:48:17.087609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.051 [2024-11-20 14:48:17.087629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.051 [2024-11-20 14:48:17.087635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.051 [2024-11-20 14:48:17.093223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16c75a0) 00:28:10.051 [2024-11-20 14:48:17.093242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.051 [2024-11-20 14:48:17.093253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.051 [2024-11-20 14:48:17.099989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.051 [2024-11-20 14:48:17.100011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.051 [2024-11-20 14:48:17.100017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.051 [2024-11-20 14:48:17.106689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.051 [2024-11-20 14:48:17.106708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.051 [2024-11-20 14:48:17.106715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.312 [2024-11-20 14:48:17.110783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.312 [2024-11-20 14:48:17.110801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.312 [2024-11-20 14:48:17.110808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.312 [2024-11-20 14:48:17.113965] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.312 [2024-11-20 14:48:17.113983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.312 [2024-11-20 14:48:17.113989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.312 [2024-11-20 14:48:17.122052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.312 [2024-11-20 14:48:17.122071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.312 [2024-11-20 14:48:17.122078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.312 3394.00 IOPS, 424.25 MiB/s [2024-11-20T13:48:17.372Z] [2024-11-20 14:48:17.130731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.312 [2024-11-20 14:48:17.130751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.312 [2024-11-20 14:48:17.130759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.312 [2024-11-20 14:48:17.138636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.312 [2024-11-20 14:48:17.138656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.312 [2024-11-20 14:48:17.138663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.312 [2024-11-20 14:48:17.144745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.312 [2024-11-20 14:48:17.144763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.312 [2024-11-20 14:48:17.144770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.312 [2024-11-20 14:48:17.149792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.312 [2024-11-20 14:48:17.149811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.312 [2024-11-20 14:48:17.149817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.312 [2024-11-20 14:48:17.157167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.157186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.157192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.160371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.160389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.160396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.164224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.164242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.164254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.167000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.167018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.167025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.170240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.170269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.170279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.177199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.177218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:10.313 [2024-11-20 14:48:17.177224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.188018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.188037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.188044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.197833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.197852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.197859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.208965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.208985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.208995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.219169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.219188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.219194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.227782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.227801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.227807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.238209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.238228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.238235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.245799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.245818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.245824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.258066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.258085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.258092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.263579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.263598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.263605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.272001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.272020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.272027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.281073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.281093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.281099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.287622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.287648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.287655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.297848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.297869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.297875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.306330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.306350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.306357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.314727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.314747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.314754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.324477] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.324497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.324503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.313 [2024-11-20 14:48:17.332722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.313 [2024-11-20 14:48:17.332741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.313 [2024-11-20 14:48:17.332748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.314 [2024-11-20 14:48:17.342918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.314 [2024-11-20 14:48:17.342938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.314 [2024-11-20 14:48:17.342944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.314 [2024-11-20 14:48:17.351193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.314 [2024-11-20 14:48:17.351212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.314 [2024-11-20 14:48:17.351218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:28:10.314 [2024-11-20 14:48:17.355663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.314 [2024-11-20 14:48:17.355682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.314 [2024-11-20 14:48:17.355689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.314 [2024-11-20 14:48:17.363092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.314 [2024-11-20 14:48:17.363111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.314 [2024-11-20 14:48:17.363117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.314 [2024-11-20 14:48:17.368338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.314 [2024-11-20 14:48:17.368358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.314 [2024-11-20 14:48:17.368364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.575 [2024-11-20 14:48:17.375460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.575 [2024-11-20 14:48:17.375480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.575 [2024-11-20 14:48:17.375487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.575 [2024-11-20 14:48:17.380128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.575 [2024-11-20 14:48:17.380147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.575 [2024-11-20 14:48:17.380154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.575 [2024-11-20 14:48:17.387490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.575 [2024-11-20 14:48:17.387509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.575 [2024-11-20 14:48:17.387516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.575 [2024-11-20 14:48:17.393002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.575 [2024-11-20 14:48:17.393021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.575 [2024-11-20 14:48:17.393027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.575 [2024-11-20 14:48:17.396287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.575 [2024-11-20 14:48:17.396306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.575 [2024-11-20 
14:48:17.396314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.575 [2024-11-20 14:48:17.401997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.575 [2024-11-20 14:48:17.402017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.575 [2024-11-20 14:48:17.402023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.575 [2024-11-20 14:48:17.411902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.575 [2024-11-20 14:48:17.411925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.575 [2024-11-20 14:48:17.411931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.575 [2024-11-20 14:48:17.422087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.575 [2024-11-20 14:48:17.422105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.575 [2024-11-20 14:48:17.422112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.575 [2024-11-20 14:48:17.425671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.575 [2024-11-20 14:48:17.425690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.575 [2024-11-20 14:48:17.425696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.575 [2024-11-20 14:48:17.433460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.575 [2024-11-20 14:48:17.433479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.575 [2024-11-20 14:48:17.433485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.575 [2024-11-20 14:48:17.437310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.437329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.437335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.441871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.441890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.441896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.449416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.449435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.449441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.458494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.458514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.458521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.464720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.464739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.464745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.467869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.467888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.467894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.475257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.475276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.475283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.480081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.480100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.480106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.486481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.486500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.486506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.493976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.493996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.494002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.500933] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.500953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.500959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.506642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.506662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.506668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.510467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.510487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.510494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.517944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.517965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.517974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.521645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.521664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.521671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.525152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.525172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.525178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.528475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.528494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.528500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.535518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.535537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.535544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.546202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.546221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.546228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.554698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.554718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.554725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.565999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.566019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.566025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.576327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.576345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 
14:48:17.576351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.587048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.587071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.587078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.597713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.597733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.597740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.609645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.609664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.609671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.620996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.621015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13120 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.621022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.576 [2024-11-20 14:48:17.631987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.576 [2024-11-20 14:48:17.632007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.576 [2024-11-20 14:48:17.632013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 14:48:17.642612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.836 [2024-11-20 14:48:17.642633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.836 [2024-11-20 14:48:17.642640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 14:48:17.653393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.836 [2024-11-20 14:48:17.653413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.836 [2024-11-20 14:48:17.653420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 14:48:17.663852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.836 [2024-11-20 14:48:17.663871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.836 [2024-11-20 14:48:17.663878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 14:48:17.675533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.836 [2024-11-20 14:48:17.675553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.836 [2024-11-20 14:48:17.675559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 14:48:17.686581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.836 [2024-11-20 14:48:17.686599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.836 [2024-11-20 14:48:17.686606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 14:48:17.696045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.836 [2024-11-20 14:48:17.696063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.836 [2024-11-20 14:48:17.696070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 14:48:17.705632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 
00:28:10.836 [2024-11-20 14:48:17.705651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.836 [2024-11-20 14:48:17.705657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 14:48:17.712217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.836 [2024-11-20 14:48:17.712235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.836 [2024-11-20 14:48:17.712242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 14:48:17.722649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.836 [2024-11-20 14:48:17.722668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.836 [2024-11-20 14:48:17.722674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 14:48:17.733201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.836 [2024-11-20 14:48:17.733219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.836 [2024-11-20 14:48:17.733226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 14:48:17.742779] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.836 [2024-11-20 14:48:17.742797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.836 [2024-11-20 14:48:17.742803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 14:48:17.753626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.836 [2024-11-20 14:48:17.753645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.836 [2024-11-20 14:48:17.753651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 14:48:17.764563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.836 [2024-11-20 14:48:17.764582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.836 [2024-11-20 14:48:17.764591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 14:48:17.776201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.836 [2024-11-20 14:48:17.776219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.836 [2024-11-20 14:48:17.776226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 14:48:17.788493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.836 [2024-11-20 14:48:17.788511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.836 [2024-11-20 14:48:17.788518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.836 [2024-11-20 14:48:17.794481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.837 [2024-11-20 14:48:17.794499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.837 [2024-11-20 14:48:17.794506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 14:48:17.804907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.837 [2024-11-20 14:48:17.804925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.837 [2024-11-20 14:48:17.804932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 14:48:17.814988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.837 [2024-11-20 14:48:17.815007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.837 [2024-11-20 14:48:17.815014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 14:48:17.825729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.837 [2024-11-20 14:48:17.825748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.837 [2024-11-20 14:48:17.825755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 14:48:17.835962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.837 [2024-11-20 14:48:17.835980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.837 [2024-11-20 14:48:17.835987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 14:48:17.846864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.837 [2024-11-20 14:48:17.846883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.837 [2024-11-20 14:48:17.846889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 14:48:17.857171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.837 [2024-11-20 14:48:17.857189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.837 [2024-11-20 14:48:17.857196] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 14:48:17.868531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.837 [2024-11-20 14:48:17.868550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.837 [2024-11-20 14:48:17.868556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 14:48:17.878788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.837 [2024-11-20 14:48:17.878807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.837 [2024-11-20 14:48:17.878814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.837 [2024-11-20 14:48:17.889423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:10.837 [2024-11-20 14:48:17.889442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.837 [2024-11-20 14:48:17.889449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.097 [2024-11-20 14:48:17.898726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.097 [2024-11-20 14:48:17.898745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:11.097 [2024-11-20 14:48:17.898752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.097 [2024-11-20 14:48:17.907902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.097 [2024-11-20 14:48:17.907922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.097 [2024-11-20 14:48:17.907929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.097 [2024-11-20 14:48:17.918753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.097 [2024-11-20 14:48:17.918772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.097 [2024-11-20 14:48:17.918778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.097 [2024-11-20 14:48:17.929467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.097 [2024-11-20 14:48:17.929487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.097 [2024-11-20 14:48:17.929493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.097 [2024-11-20 14:48:17.938551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.097 [2024-11-20 14:48:17.938570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.097 [2024-11-20 14:48:17.938580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.097 [2024-11-20 14:48:17.947749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.097 [2024-11-20 14:48:17.947769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.097 [2024-11-20 14:48:17.947775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.097 [2024-11-20 14:48:17.957203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.097 [2024-11-20 14:48:17.957222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.097 [2024-11-20 14:48:17.957229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.097 [2024-11-20 14:48:17.967644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.097 [2024-11-20 14:48:17.967663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.097 [2024-11-20 14:48:17.967669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.097 [2024-11-20 14:48:17.976939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.097 [2024-11-20 
14:48:17.976959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.097 [2024-11-20 14:48:17.976965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.097 [2024-11-20 14:48:17.987542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.097 [2024-11-20 14:48:17.987563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.097 [2024-11-20 14:48:17.987569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.097 [2024-11-20 14:48:17.997884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.097 [2024-11-20 14:48:17.997904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.097 [2024-11-20 14:48:17.997911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.098 [2024-11-20 14:48:18.007647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.098 [2024-11-20 14:48:18.007667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.098 [2024-11-20 14:48:18.007673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.098 [2024-11-20 14:48:18.017580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x16c75a0) 00:28:11.098 [2024-11-20 14:48:18.017599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.098 [2024-11-20 14:48:18.017606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.098 [2024-11-20 14:48:18.027172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.098 [2024-11-20 14:48:18.027194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.098 [2024-11-20 14:48:18.027201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.098 [2024-11-20 14:48:18.038534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.098 [2024-11-20 14:48:18.038553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.098 [2024-11-20 14:48:18.038560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.098 [2024-11-20 14:48:18.049534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.098 [2024-11-20 14:48:18.049553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.098 [2024-11-20 14:48:18.049560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.098 [2024-11-20 14:48:18.061204] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.098 [2024-11-20 14:48:18.061223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.098 [2024-11-20 14:48:18.061230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.098 [2024-11-20 14:48:18.071490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.098 [2024-11-20 14:48:18.071510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.098 [2024-11-20 14:48:18.071516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.098 [2024-11-20 14:48:18.081631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.098 [2024-11-20 14:48:18.081650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.098 [2024-11-20 14:48:18.081656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.098 [2024-11-20 14:48:18.091194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.098 [2024-11-20 14:48:18.091213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.098 [2024-11-20 14:48:18.091219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:28:11.098 [2024-11-20 14:48:18.101049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.098 [2024-11-20 14:48:18.101067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.098 [2024-11-20 14:48:18.101074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.098 [2024-11-20 14:48:18.111379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.098 [2024-11-20 14:48:18.111398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.098 [2024-11-20 14:48:18.111405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.098 [2024-11-20 14:48:18.121257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.098 [2024-11-20 14:48:18.121276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.098 [2024-11-20 14:48:18.121282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.098 3515.50 IOPS, 439.44 MiB/s [2024-11-20T13:48:18.158Z] [2024-11-20 14:48:18.133133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c75a0) 00:28:11.098 [2024-11-20 14:48:18.133153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.098 [2024-11-20 14:48:18.133159] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.098 00:28:11.098 Latency(us) 00:28:11.098 [2024-11-20T13:48:18.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.098 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:11.098 nvme0n1 : 2.01 3511.62 438.95 0.00 0.00 4551.66 648.53 16165.55 00:28:11.098 [2024-11-20T13:48:18.158Z] =================================================================================================================== 00:28:11.098 [2024-11-20T13:48:18.158Z] Total : 3511.62 438.95 0.00 0.00 4551.66 648.53 16165.55 00:28:11.098 { 00:28:11.098 "results": [ 00:28:11.098 { 00:28:11.098 "job": "nvme0n1", 00:28:11.098 "core_mask": "0x2", 00:28:11.098 "workload": "randread", 00:28:11.098 "status": "finished", 00:28:11.098 "queue_depth": 16, 00:28:11.098 "io_size": 131072, 00:28:11.098 "runtime": 2.006768, 00:28:11.098 "iops": 3511.6166891240046, 00:28:11.098 "mibps": 438.95208614050057, 00:28:11.098 "io_failed": 0, 00:28:11.098 "io_timeout": 0, 00:28:11.098 "avg_latency_us": 4551.664258076723, 00:28:11.098 "min_latency_us": 648.5333333333333, 00:28:11.098 "max_latency_us": 16165.546666666667 00:28:11.098 } 00:28:11.098 ], 00:28:11.098 "core_count": 1 00:28:11.098 } 00:28:11.098 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:11.098 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:11.098 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:11.098 | .driver_specific 00:28:11.098 | .nvme_error 00:28:11.098 | .status_code 00:28:11.098 | .command_transient_transport_error' 00:28:11.098 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:11.358 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 228 > 0 )) 00:28:11.359 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4085271 00:28:11.359 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4085271 ']' 00:28:11.359 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4085271 00:28:11.359 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:11.359 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:11.359 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4085271 00:28:11.359 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:11.359 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:11.359 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4085271' 00:28:11.359 killing process with pid 4085271 00:28:11.359 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4085271 00:28:11.359 Received shutdown signal, test time was about 2.000000 seconds 00:28:11.359 00:28:11.359 Latency(us) 00:28:11.359 [2024-11-20T13:48:18.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.359 [2024-11-20T13:48:18.419Z] =================================================================================================================== 00:28:11.359 [2024-11-20T13:48:18.419Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:11.359 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4085271 00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4085946 00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4085946 /var/tmp/bperf.sock 00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4085946 ']' 00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:11.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:11.619 [2024-11-20 14:48:18.496543] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:28:11.619 [2024-11-20 14:48:18.496596] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4085946 ] 00:28:11.619 [2024-11-20 14:48:18.561808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.619 [2024-11-20 14:48:18.590104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:11.619 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:11.880 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:11.880 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.880 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.880 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.880 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:11.880 14:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:12.140 nvme0n1 00:28:12.140 14:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:12.140 14:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.140 14:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:12.140 14:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.140 14:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:12.140 14:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:12.140 Running I/O for 2 seconds... 
00:28:12.140 [2024-11-20 14:48:19.187579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016eeb760 00:28:12.140 [2024-11-20 14:48:19.188349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.140 [2024-11-20 14:48:19.188380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:12.140 [2024-11-20 14:48:19.196130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee9e10 00:28:12.140 [2024-11-20 14:48:19.196756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.140 [2024-11-20 14:48:19.196778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:12.401 [2024-11-20 14:48:19.205472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ede038 00:28:12.401 [2024-11-20 14:48:19.206460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-11-20 14:48:19.206477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:12.401 [2024-11-20 14:48:19.214129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016eec840 00:28:12.401 [2024-11-20 14:48:19.214927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-11-20 14:48:19.214944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:12.401 [2024-11-20 14:48:19.223783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee6fa8 00:28:12.401 [2024-11-20 14:48:19.224991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-11-20 14:48:19.225009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:12.401 [2024-11-20 14:48:19.231828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016edf550 00:28:12.401 [2024-11-20 14:48:19.232728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-11-20 14:48:19.232745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:12.401 [2024-11-20 14:48:19.241122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee5ec8 00:28:12.401 [2024-11-20 14:48:19.242337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-11-20 14:48:19.242355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:12.401 [2024-11-20 14:48:19.249168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ef7da8 00:28:12.401 [2024-11-20 14:48:19.250058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-11-20 14:48:19.250075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:12.401 [2024-11-20 14:48:19.258160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efef90 00:28:12.401 [2024-11-20 14:48:19.259282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-11-20 14:48:19.259300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:12.401 [2024-11-20 14:48:19.266656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee88f8 00:28:12.401 [2024-11-20 14:48:19.267559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-11-20 14:48:19.267576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:12.401 [2024-11-20 14:48:19.274587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee27f0 00:28:12.401 [2024-11-20 14:48:19.275470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-11-20 14:48:19.275487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:12.401 [2024-11-20 14:48:19.284469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efd208 00:28:12.401 [2024-11-20 14:48:19.285586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-11-20 14:48:19.285603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:12.401 [2024-11-20 14:48:19.291695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee6738 00:28:12.401 [2024-11-20 14:48:19.292370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.292386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.300601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee27f0 00:28:12.402 [2024-11-20 14:48:19.301512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.301529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.309310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee6300 00:28:12.402 [2024-11-20 14:48:19.310215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.310235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.318167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ef7538 00:28:12.402 [2024-11-20 14:48:19.319071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 
[2024-11-20 14:48:19.319088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.326599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efc998 00:28:12.402 [2024-11-20 14:48:19.327499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.327516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.334626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee88f8 00:28:12.402 [2024-11-20 14:48:19.335429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.335446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.343048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efc128 00:28:12.402 [2024-11-20 14:48:19.343844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.343862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.353084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee9e10 00:28:12.402 [2024-11-20 14:48:19.354115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1072 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.354132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.361600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee9168 00:28:12.402 [2024-11-20 14:48:19.362743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.362760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.369438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee5ec8 00:28:12.402 [2024-11-20 14:48:19.370311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.370328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.377977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016edf988 00:28:12.402 [2024-11-20 14:48:19.378791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.378808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.386524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016eed4e8 00:28:12.402 [2024-11-20 14:48:19.387090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:37 nsid:1 lba:3454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.387108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.395529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efeb58 00:28:12.402 [2024-11-20 14:48:19.396211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.396228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.405419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ef0350 00:28:12.402 [2024-11-20 14:48:19.406798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.406815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.411508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efbcf0 00:28:12.402 [2024-11-20 14:48:19.412077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.412094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.421748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee1b48 00:28:12.402 [2024-11-20 14:48:19.422775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.422791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.429895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016eeee38 00:28:12.402 [2024-11-20 14:48:19.430920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.430937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.437500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ef3e60 00:28:12.402 [2024-11-20 14:48:19.437939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.437955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.448333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee49b0 00:28:12.402 [2024-11-20 14:48:19.449710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.402 [2024-11-20 14:48:19.449727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:12.402 [2024-11-20 14:48:19.454414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016edfdc0 00:28:12.402 
[2024-11-20 14:48:19.454981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.402 [2024-11-20 14:48:19.454997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:28:12.663 [2024-11-20 14:48:19.464099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016eeaef0
00:28:12.663 [2024-11-20 14:48:19.465008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.663 [2024-11-20 14:48:19.465025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:28:12.663 [2024-11-20 14:48:19.472976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee49b0
00:28:12.663 [2024-11-20 14:48:19.473867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.663 [2024-11-20 14:48:19.473883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:28:12.663 [2024-11-20 14:48:19.481594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee01f8
00:28:12.663 [2024-11-20 14:48:19.482486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.663 [2024-11-20 14:48:19.482503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:28:12.663 [2024-11-20 14:48:19.490178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ef0bc0
00:28:12.663 [2024-11-20 14:48:19.491067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.663 [2024-11-20 14:48:19.491084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:28:12.663 [2024-11-20 14:48:19.498765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efeb58
00:28:12.663 [2024-11-20 14:48:19.499658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.663 [2024-11-20 14:48:19.499675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:28:12.663 [2024-11-20 14:48:19.506761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee6fa8
00:28:12.663 [2024-11-20 14:48:19.507851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.663 [2024-11-20 14:48:19.507868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:28:12.663 [2024-11-20 14:48:19.514126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee27f0
00:28:12.663 [2024-11-20 14:48:19.514704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.663 [2024-11-20 14:48:19.514720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:12.663 [2024-11-20 14:48:19.523148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee27f0
00:28:12.663 [2024-11-20 14:48:19.523726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.663 [2024-11-20 14:48:19.523742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:28:12.663 [2024-11-20 14:48:19.532882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee8088
00:28:12.663 [2024-11-20 14:48:19.533812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.663 [2024-11-20 14:48:19.533833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:12.663 [2024-11-20 14:48:19.542892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016eef270
00:28:12.663 [2024-11-20 14:48:19.544303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.544318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.548976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee5658
00:28:12.664 [2024-11-20 14:48:19.549578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.549594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.557120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016edfdc0
00:28:12.664 [2024-11-20 14:48:19.557719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.557735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.567430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016eebb98
00:28:12.664 [2024-11-20 14:48:19.568467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.568484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.576443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efb480
00:28:12.664 [2024-11-20 14:48:19.577592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.577608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.584037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee49b0
00:28:12.664 [2024-11-20 14:48:19.584610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.584627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.593151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee99d8
00:28:12.664 [2024-11-20 14:48:19.594048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.594064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.601361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee7818
00:28:12.664 [2024-11-20 14:48:19.602166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.602181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.610382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee0630
00:28:12.664 [2024-11-20 14:48:19.611310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.611326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.619399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee8d30
00:28:12.664 [2024-11-20 14:48:19.620442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.620459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.626449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016eea680
00:28:12.664 [2024-11-20 14:48:19.627015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.627031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.634426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016edf988
00:28:12.664 [2024-11-20 14:48:19.634993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.635008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.643865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016eecc78
00:28:12.664 [2024-11-20 14:48:19.644554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.644570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.652435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016eed0b0
00:28:12.664 [2024-11-20 14:48:19.653231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.653249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.661894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016eddc00
00:28:12.664 [2024-11-20 14:48:19.662813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.662830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.670455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ede038
00:28:12.664 [2024-11-20 14:48:19.671486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.671502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.678056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016ee3498
00:28:12.664 [2024-11-20 14:48:19.678502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.678519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.686630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efda78
00:28:12.664 [2024-11-20 14:48:19.687073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.687090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.696000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.664 [2024-11-20 14:48:19.696118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.696134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.704922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.664 [2024-11-20 14:48:19.705038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.705053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.713860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.664 [2024-11-20 14:48:19.713980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.713995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.664 [2024-11-20 14:48:19.722810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.664 [2024-11-20 14:48:19.722927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.664 [2024-11-20 14:48:19.722942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.926 [2024-11-20 14:48:19.731714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.926 [2024-11-20 14:48:19.731833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.926 [2024-11-20 14:48:19.731849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.926 [2024-11-20 14:48:19.740623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.926 [2024-11-20 14:48:19.740740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.926 [2024-11-20 14:48:19.740756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.926 [2024-11-20 14:48:19.749522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.926 [2024-11-20 14:48:19.749641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.926 [2024-11-20 14:48:19.749656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.926 [2024-11-20 14:48:19.758429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.926 [2024-11-20 14:48:19.758546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.926 [2024-11-20 14:48:19.758566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.926 [2024-11-20 14:48:19.767342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.926 [2024-11-20 14:48:19.767460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.926 [2024-11-20 14:48:19.767476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.926 [2024-11-20 14:48:19.776242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.926 [2024-11-20 14:48:19.776365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.926 [2024-11-20 14:48:19.776381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.926 [2024-11-20 14:48:19.785152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.926 [2024-11-20 14:48:19.785275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.926 [2024-11-20 14:48:19.785290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.926 [2024-11-20 14:48:19.794033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.926 [2024-11-20 14:48:19.794150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.926 [2024-11-20 14:48:19.794166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.926 [2024-11-20 14:48:19.802968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.926 [2024-11-20 14:48:19.803086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.926 [2024-11-20 14:48:19.803101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.926 [2024-11-20 14:48:19.811889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.926 [2024-11-20 14:48:19.812007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.926 [2024-11-20 14:48:19.812022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.926 [2024-11-20 14:48:19.820809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.926 [2024-11-20 14:48:19.820927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.926 [2024-11-20 14:48:19.820942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.926 [2024-11-20 14:48:19.829702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.926 [2024-11-20 14:48:19.829820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.926 [2024-11-20 14:48:19.829836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.926 [2024-11-20 14:48:19.838596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.926 [2024-11-20 14:48:19.838713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.926 [2024-11-20 14:48:19.838731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.926 [2024-11-20 14:48:19.847503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.926 [2024-11-20 14:48:19.847620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.926 [2024-11-20 14:48:19.847636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.926 [2024-11-20 14:48:19.856438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.926 [2024-11-20 14:48:19.856553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.926 [2024-11-20 14:48:19.856569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.926 [2024-11-20 14:48:19.865335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.926 [2024-11-20 14:48:19.865454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.926 [2024-11-20 14:48:19.865469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.926 [2024-11-20 14:48:19.874230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.927 [2024-11-20 14:48:19.874353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.927 [2024-11-20 14:48:19.874369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.927 [2024-11-20 14:48:19.883132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.927 [2024-11-20 14:48:19.883254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.927 [2024-11-20 14:48:19.883270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.927 [2024-11-20 14:48:19.892034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.927 [2024-11-20 14:48:19.892150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.927 [2024-11-20 14:48:19.892166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.927 [2024-11-20 14:48:19.900966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.927 [2024-11-20 14:48:19.901083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.927 [2024-11-20 14:48:19.901098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.927 [2024-11-20 14:48:19.909873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.927 [2024-11-20 14:48:19.909992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.927 [2024-11-20 14:48:19.910008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.927 [2024-11-20 14:48:19.918788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.927 [2024-11-20 14:48:19.918909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.927 [2024-11-20 14:48:19.918926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.927 [2024-11-20 14:48:19.927692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.927 [2024-11-20 14:48:19.927810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.927 [2024-11-20 14:48:19.927826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.927 [2024-11-20 14:48:19.936591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.927 [2024-11-20 14:48:19.936709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.927 [2024-11-20 14:48:19.936724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.927 [2024-11-20 14:48:19.945512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.927 [2024-11-20 14:48:19.945630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.927 [2024-11-20 14:48:19.945645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.927 [2024-11-20 14:48:19.954416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.927 [2024-11-20 14:48:19.954532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.927 [2024-11-20 14:48:19.954548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.927 [2024-11-20 14:48:19.963328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.927 [2024-11-20 14:48:19.963445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.927 [2024-11-20 14:48:19.963460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.927 [2024-11-20 14:48:19.972226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.927 [2024-11-20 14:48:19.972348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.927 [2024-11-20 14:48:19.972364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:12.927 [2024-11-20 14:48:19.981140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:12.927 [2024-11-20 14:48:19.981260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.927 [2024-11-20 14:48:19.981276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.188 [2024-11-20 14:48:19.990047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.188 [2024-11-20 14:48:19.990164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.188 [2024-11-20 14:48:19.990180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.188 [2024-11-20 14:48:19.998964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.188 [2024-11-20 14:48:19.999083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.188 [2024-11-20 14:48:19.999099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.188 [2024-11-20 14:48:20.008357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.188 [2024-11-20 14:48:20.008476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.188 [2024-11-20 14:48:20.008493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.188 [2024-11-20 14:48:20.017396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.188 [2024-11-20 14:48:20.017518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.188 [2024-11-20 14:48:20.017534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.188 [2024-11-20 14:48:20.026324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.188 [2024-11-20 14:48:20.026444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.188 [2024-11-20 14:48:20.026460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.188 [2024-11-20 14:48:20.035724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.188 [2024-11-20 14:48:20.035842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.188 [2024-11-20 14:48:20.035858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.188 [2024-11-20 14:48:20.044834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.188 [2024-11-20 14:48:20.044953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.188 [2024-11-20 14:48:20.044969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.188 [2024-11-20 14:48:20.053766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.188 [2024-11-20 14:48:20.053882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.188 [2024-11-20 14:48:20.053897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.188 [2024-11-20 14:48:20.062684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.189 [2024-11-20 14:48:20.062801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.189 [2024-11-20 14:48:20.062817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.189 [2024-11-20 14:48:20.072166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.189 [2024-11-20 14:48:20.072293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.189 [2024-11-20 14:48:20.072313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.189 [2024-11-20 14:48:20.081071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.189 [2024-11-20 14:48:20.081190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.189 [2024-11-20 14:48:20.081206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.189 [2024-11-20 14:48:20.090003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.189 [2024-11-20 14:48:20.090123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.189 [2024-11-20 14:48:20.090139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.189 [2024-11-20 14:48:20.098938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.189 [2024-11-20 14:48:20.099058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.189 [2024-11-20 14:48:20.099074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.189 [2024-11-20 14:48:20.107888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.189 [2024-11-20 14:48:20.108007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.189 [2024-11-20 14:48:20.108023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.189 [2024-11-20 14:48:20.116805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.189 [2024-11-20 14:48:20.116925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.189 [2024-11-20 14:48:20.116941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.189 [2024-11-20 14:48:20.125717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.189 [2024-11-20 14:48:20.125835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.189 [2024-11-20 14:48:20.125850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.189 [2024-11-20 14:48:20.134637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.189 [2024-11-20 14:48:20.134758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.189 [2024-11-20 14:48:20.134773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.189 [2024-11-20 14:48:20.143556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.189 [2024-11-20 14:48:20.143675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.189 [2024-11-20 14:48:20.143690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.189 [2024-11-20 14:48:20.152466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.189 [2024-11-20 14:48:20.152590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.189 [2024-11-20 14:48:20.152606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.189 [2024-11-20 14:48:20.161359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.189 [2024-11-20 14:48:20.161476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.189 [2024-11-20 14:48:20.161490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.189 [2024-11-20 14:48:20.170273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.189 [2024-11-20 14:48:20.170389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.189 [2024-11-20 14:48:20.170405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:13.189 [2024-11-20 14:48:20.179175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10
00:28:13.189 29024.00 IOPS, 113.38 MiB/s
[2024-11-20T13:48:20.249Z] [2024-11-20 14:48:20.179819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.189 [2024-11-20 14:48:20.179834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.189 [2024-11-20 14:48:20.188081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.189 [2024-11-20 14:48:20.188199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.189 [2024-11-20 14:48:20.188214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.189 [2024-11-20 14:48:20.197000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.189 [2024-11-20 14:48:20.197118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.189 [2024-11-20 14:48:20.197134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.189 [2024-11-20 14:48:20.205889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.189 [2024-11-20 14:48:20.206005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.189 [2024-11-20 14:48:20.206020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.189 [2024-11-20 14:48:20.214909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.189 [2024-11-20 14:48:20.215027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.189 [2024-11-20 14:48:20.215042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.189 [2024-11-20 14:48:20.223818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.189 [2024-11-20 14:48:20.223936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.189 [2024-11-20 14:48:20.223951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.189 [2024-11-20 14:48:20.232737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.189 [2024-11-20 14:48:20.232856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.190 [2024-11-20 14:48:20.232872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.190 [2024-11-20 14:48:20.241654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.190 [2024-11-20 14:48:20.241771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.190 [2024-11-20 14:48:20.241787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.450 [2024-11-20 14:48:20.250568] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.450 [2024-11-20 14:48:20.250684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.450 [2024-11-20 14:48:20.250700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.450 [2024-11-20 14:48:20.259482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.450 [2024-11-20 14:48:20.259599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.450 [2024-11-20 14:48:20.259614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.450 [2024-11-20 14:48:20.268384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.450 [2024-11-20 14:48:20.268502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.450 [2024-11-20 14:48:20.268518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.450 [2024-11-20 14:48:20.277293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.450 [2024-11-20 14:48:20.277415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.450 [2024-11-20 14:48:20.277430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:28:13.450 [2024-11-20 14:48:20.286208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.450 [2024-11-20 14:48:20.286334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.450 [2024-11-20 14:48:20.286350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.450 [2024-11-20 14:48:20.295236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.450 [2024-11-20 14:48:20.295360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.450 [2024-11-20 14:48:20.295375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.450 [2024-11-20 14:48:20.304148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.450 [2024-11-20 14:48:20.304270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.450 [2024-11-20 14:48:20.304289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.450 [2024-11-20 14:48:20.313090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.450 [2024-11-20 14:48:20.313212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.450 [2024-11-20 14:48:20.313228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.450 [2024-11-20 14:48:20.322026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.450 [2024-11-20 14:48:20.322145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.450 [2024-11-20 14:48:20.322160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.450 [2024-11-20 14:48:20.330948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.450 [2024-11-20 14:48:20.331068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.450 [2024-11-20 14:48:20.331083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.450 [2024-11-20 14:48:20.339881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.450 [2024-11-20 14:48:20.339998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.450 [2024-11-20 14:48:20.340014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.450 [2024-11-20 14:48:20.348788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.450 [2024-11-20 14:48:20.348907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.450 [2024-11-20 14:48:20.348923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.357705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.357822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 [2024-11-20 14:48:20.357837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.366610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.366729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 [2024-11-20 14:48:20.366744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.375524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.375642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 [2024-11-20 14:48:20.375657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.384444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.384567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 [2024-11-20 14:48:20.384583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.393365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.393483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 [2024-11-20 14:48:20.393499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.402264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.402381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 [2024-11-20 14:48:20.402396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.411167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.411292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 [2024-11-20 14:48:20.411308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.420091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.420210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 
[2024-11-20 14:48:20.420226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.429052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.429169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 [2024-11-20 14:48:20.429185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.437973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.438090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 [2024-11-20 14:48:20.438106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.446872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.446990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 [2024-11-20 14:48:20.447006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.455776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.455894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1772 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 [2024-11-20 14:48:20.455910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.464689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.464809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 [2024-11-20 14:48:20.464824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.473602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.473720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 [2024-11-20 14:48:20.473735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.482521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.482640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 [2024-11-20 14:48:20.482656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.491445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.491563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:90 nsid:1 lba:22442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 [2024-11-20 14:48:20.491578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.500360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.500479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 [2024-11-20 14:48:20.500495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.451 [2024-11-20 14:48:20.509275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.451 [2024-11-20 14:48:20.509393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.451 [2024-11-20 14:48:20.509408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.518208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.518330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.518346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.527159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.527283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.527299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.536076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.536194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.536213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.544993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.545111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.545128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.553907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.554026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.554041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.562825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 
[2024-11-20 14:48:20.562944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.562960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.571751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.571869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.571885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.580641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.580758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:42 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.580774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.589559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.589675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.589690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.598450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.598569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.598585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.607374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.607492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.607508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.616272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.616396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.616411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.625215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.625340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.625357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.634115] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.634232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.634252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.643029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.643147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.643162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.651929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.652048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.652064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.660848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.660966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.660982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:28:13.712 [2024-11-20 14:48:20.669774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.669891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.669907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.678668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.678788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.678803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.687602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.687719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.712 [2024-11-20 14:48:20.687734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.712 [2024-11-20 14:48:20.696690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.712 [2024-11-20 14:48:20.696812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.713 [2024-11-20 14:48:20.696828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.713 [2024-11-20 14:48:20.705619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.713 [2024-11-20 14:48:20.705738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.713 [2024-11-20 14:48:20.705755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.713 [2024-11-20 14:48:20.714559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.713 [2024-11-20 14:48:20.714680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.713 [2024-11-20 14:48:20.714696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.713 [2024-11-20 14:48:20.723481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.713 [2024-11-20 14:48:20.723600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.713 [2024-11-20 14:48:20.723615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.713 [2024-11-20 14:48:20.732407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.713 [2024-11-20 14:48:20.732526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.713 [2024-11-20 14:48:20.732541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.713 [2024-11-20 14:48:20.741309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.713 [2024-11-20 14:48:20.741428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.713 [2024-11-20 14:48:20.741443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.713 [2024-11-20 14:48:20.750267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.713 [2024-11-20 14:48:20.750386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.713 [2024-11-20 14:48:20.750402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.713 [2024-11-20 14:48:20.759186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.713 [2024-11-20 14:48:20.759311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.713 [2024-11-20 14:48:20.759327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.713 [2024-11-20 14:48:20.768105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.713 [2024-11-20 14:48:20.768221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.713 [2024-11-20 14:48:20.768239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.777015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.777133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.777148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.785920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.786038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.786054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.794821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.794939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.794955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.803727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.803845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 
[2024-11-20 14:48:20.803861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.812655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.812775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.812790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.821579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.821695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.821710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.830480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.830597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.830612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.839380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.839498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9969 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.839513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.848300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.848419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.848437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.857219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.857343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.857358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.866136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.866254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.866269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.875038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.875156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:96 nsid:1 lba:16309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.875172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.883946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.884066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.884081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.892856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.892973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.892988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.901790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.901908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.901924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.910707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.910823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.910838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.919606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.919723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.919738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.928509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.928629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.928645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.937435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.937554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.937569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.946349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 
[2024-11-20 14:48:20.946468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.946484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.955257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.955374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.955389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.964166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.964291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.964306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.973057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.973175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.973190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.981943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.982061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.982076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.990875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.990992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.974 [2024-11-20 14:48:20.991008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.974 [2024-11-20 14:48:20.999799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.974 [2024-11-20 14:48:20.999918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.975 [2024-11-20 14:48:20.999933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.975 [2024-11-20 14:48:21.008745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.975 [2024-11-20 14:48:21.008863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.975 [2024-11-20 14:48:21.008879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.975 [2024-11-20 14:48:21.017629] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.975 [2024-11-20 14:48:21.017747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.975 [2024-11-20 14:48:21.017763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.975 [2024-11-20 14:48:21.026545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:13.975 [2024-11-20 14:48:21.026663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.975 [2024-11-20 14:48:21.026678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:14.235 [2024-11-20 14:48:21.035449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:14.236 [2024-11-20 14:48:21.035565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.236 [2024-11-20 14:48:21.035581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:14.236 [2024-11-20 14:48:21.044374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:14.236 [2024-11-20 14:48:21.044493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.236 [2024-11-20 14:48:21.044509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:28:14.236 [2024-11-20 14:48:21.053284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:14.236 [2024-11-20 14:48:21.053402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.236 [2024-11-20 14:48:21.053418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:14.236 [2024-11-20 14:48:21.062190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:14.236 [2024-11-20 14:48:21.062314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.236 [2024-11-20 14:48:21.062329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:14.236 [2024-11-20 14:48:21.071095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:14.236 [2024-11-20 14:48:21.071213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.236 [2024-11-20 14:48:21.071228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:14.236 [2024-11-20 14:48:21.080006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:14.236 [2024-11-20 14:48:21.080123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.236 [2024-11-20 14:48:21.080141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:14.236 [2024-11-20 14:48:21.088932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:14.236 [2024-11-20 14:48:21.089050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.236 [2024-11-20 14:48:21.089066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:14.236 [2024-11-20 14:48:21.097851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:14.236 [2024-11-20 14:48:21.097969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.236 [2024-11-20 14:48:21.097985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:14.236 [2024-11-20 14:48:21.106764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:14.236 [2024-11-20 14:48:21.106882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.236 [2024-11-20 14:48:21.106897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:14.236 [2024-11-20 14:48:21.115660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:14.236 [2024-11-20 14:48:21.115779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.236 [2024-11-20 14:48:21.115794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:14.236 [2024-11-20 14:48:21.124561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:14.236 [2024-11-20 14:48:21.124679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.236 [2024-11-20 14:48:21.124694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:14.236 [2024-11-20 14:48:21.133465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:14.236 [2024-11-20 14:48:21.133584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.236 [2024-11-20 14:48:21.133600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:14.236 [2024-11-20 14:48:21.142377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:14.236 [2024-11-20 14:48:21.142493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.236 [2024-11-20 14:48:21.142509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:14.236 [2024-11-20 14:48:21.151311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:14.236 [2024-11-20 14:48:21.151432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.236 [2024-11-20 14:48:21.151447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:14.236 [2024-11-20 14:48:21.160206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:14.236 [2024-11-20 14:48:21.160333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.236 [2024-11-20 14:48:21.160349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:14.236 [2024-11-20 14:48:21.169118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:14.236 [2024-11-20 14:48:21.169234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.236 [2024-11-20 14:48:21.169253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:14.236 [2024-11-20 14:48:21.178017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebf9d0) with pdu=0x200016efac10 00:28:14.236 [2024-11-20 14:48:21.178133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:14.236 [2024-11-20 14:48:21.178149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:14.236 28843.50 IOPS, 112.67 MiB/s 00:28:14.236 Latency(us) 00:28:14.236 [2024-11-20T13:48:21.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.236 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:14.236 nvme0n1 : 2.01 28844.24 112.67 0.00 0.00 4429.56 2034.35 
10813.44 00:28:14.236 [2024-11-20T13:48:21.296Z] =================================================================================================================== 00:28:14.236 [2024-11-20T13:48:21.296Z] Total : 28844.24 112.67 0.00 0.00 4429.56 2034.35 10813.44 00:28:14.236 { 00:28:14.236 "results": [ 00:28:14.236 { 00:28:14.236 "job": "nvme0n1", 00:28:14.236 "core_mask": "0x2", 00:28:14.236 "workload": "randwrite", 00:28:14.236 "status": "finished", 00:28:14.236 "queue_depth": 128, 00:28:14.236 "io_size": 4096, 00:28:14.236 "runtime": 2.005496, 00:28:14.236 "iops": 28844.236039363826, 00:28:14.236 "mibps": 112.67279702876495, 00:28:14.236 "io_failed": 0, 00:28:14.236 "io_timeout": 0, 00:28:14.236 "avg_latency_us": 4429.562448989, 00:28:14.236 "min_latency_us": 2034.3466666666666, 00:28:14.236 "max_latency_us": 10813.44 00:28:14.236 } 00:28:14.236 ], 00:28:14.236 "core_count": 1 00:28:14.236 } 00:28:14.236 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:14.236 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:14.236 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:14.236 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:14.236 | .driver_specific 00:28:14.236 | .nvme_error 00:28:14.236 | .status_code 00:28:14.236 | .command_transient_transport_error' 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 226 > 0 )) 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4085946 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4085946 ']' 
00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4085946 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4085946 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4085946' 00:28:14.496 killing process with pid 4085946 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4085946 00:28:14.496 Received shutdown signal, test time was about 2.000000 seconds 00:28:14.496 00:28:14.496 Latency(us) 00:28:14.496 [2024-11-20T13:48:21.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.496 [2024-11-20T13:48:21.556Z] =================================================================================================================== 00:28:14.496 [2024-11-20T13:48:21.556Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4085946 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # rw=randwrite 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4086624 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4086624 /var/tmp/bperf.sock 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4086624 ']' 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:14.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:14.496 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:14.496 [2024-11-20 14:48:21.546275] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:28:14.496 [2024-11-20 14:48:21.546332] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4086624 ] 00:28:14.496 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:14.496 Zero copy mechanism will not be used. 00:28:14.756 [2024-11-20 14:48:21.610933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.756 [2024-11-20 14:48:21.640148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.756 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.756 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:14.756 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:14.756 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:15.016 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:15.016 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.016 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.016 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.016 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:15.016 14:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:15.276 nvme0n1 00:28:15.276 14:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:15.276 14:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.276 14:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.276 14:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.276 14:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:15.276 14:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:15.276 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:15.276 Zero copy mechanism will not be used. 00:28:15.276 Running I/O for 2 seconds... 
00:28:15.276 [2024-11-20 14:48:22.220454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.276 [2024-11-20 14:48:22.220661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.276 [2024-11-20 14:48:22.220687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.276 [2024-11-20 14:48:22.226557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.276 [2024-11-20 14:48:22.226614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.276 [2024-11-20 14:48:22.226633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.276 [2024-11-20 14:48:22.232712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.276 [2024-11-20 14:48:22.232947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.276 [2024-11-20 14:48:22.232966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.276 [2024-11-20 14:48:22.243037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.276 [2024-11-20 14:48:22.243262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.276 [2024-11-20 14:48:22.243280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.276 [2024-11-20 14:48:22.250220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.276 [2024-11-20 14:48:22.250296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.276 [2024-11-20 14:48:22.250312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.276 [2024-11-20 14:48:22.257883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.276 [2024-11-20 14:48:22.258280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.276 [2024-11-20 14:48:22.258298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.276 [2024-11-20 14:48:22.267490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.276 [2024-11-20 14:48:22.267743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.276 [2024-11-20 14:48:22.267760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.276 [2024-11-20 14:48:22.276608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.276 [2024-11-20 14:48:22.276834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.277 [2024-11-20 14:48:22.276851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.277 [2024-11-20 14:48:22.285540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.277 [2024-11-20 14:48:22.285698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.277 [2024-11-20 14:48:22.285715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.277 [2024-11-20 14:48:22.294682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.277 [2024-11-20 14:48:22.294897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.277 [2024-11-20 14:48:22.294913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.277 [2024-11-20 14:48:22.302699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.277 [2024-11-20 14:48:22.302914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.277 [2024-11-20 14:48:22.302931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.277 [2024-11-20 14:48:22.312104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.277 [2024-11-20 14:48:22.312173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.277 [2024-11-20 14:48:22.312189] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.277 [2024-11-20 14:48:22.317427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.277 [2024-11-20 14:48:22.317623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.277 [2024-11-20 14:48:22.317642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.277 [2024-11-20 14:48:22.323234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.277 [2024-11-20 14:48:22.323422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.277 [2024-11-20 14:48:22.323441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.277 [2024-11-20 14:48:22.328169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.277 [2024-11-20 14:48:22.328354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.277 [2024-11-20 14:48:22.328371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.277 [2024-11-20 14:48:22.332653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.277 [2024-11-20 14:48:22.332850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:15.277 [2024-11-20 14:48:22.332867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.537 [2024-11-20 14:48:22.339597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.537 [2024-11-20 14:48:22.339805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.537 [2024-11-20 14:48:22.339823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.537 [2024-11-20 14:48:22.345159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.537 [2024-11-20 14:48:22.345294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.537 [2024-11-20 14:48:22.345311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.537 [2024-11-20 14:48:22.352128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.537 [2024-11-20 14:48:22.352324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.537 [2024-11-20 14:48:22.352340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.537 [2024-11-20 14:48:22.356960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.537 [2024-11-20 14:48:22.357098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.537 [2024-11-20 14:48:22.357114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.537 [2024-11-20 14:48:22.360826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.537 [2024-11-20 14:48:22.361031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.537 [2024-11-20 14:48:22.361048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.537 [2024-11-20 14:48:22.365680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.537 [2024-11-20 14:48:22.365881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.537 [2024-11-20 14:48:22.365898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.537 [2024-11-20 14:48:22.371701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.537 [2024-11-20 14:48:22.371893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.537 [2024-11-20 14:48:22.371911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.537 [2024-11-20 14:48:22.376066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.537 [2024-11-20 14:48:22.376261] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.537 [2024-11-20 14:48:22.376278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.537 [2024-11-20 14:48:22.383150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.537 [2024-11-20 14:48:22.383343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.537 [2024-11-20 14:48:22.383359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.537 [2024-11-20 14:48:22.388469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.537 [2024-11-20 14:48:22.388672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.537 [2024-11-20 14:48:22.388688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.537 [2024-11-20 14:48:22.393560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.537 [2024-11-20 14:48:22.393758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.537 [2024-11-20 14:48:22.393775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.537 [2024-11-20 14:48:22.400768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.537 [2024-11-20 14:48:22.400939] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.537 [2024-11-20 14:48:22.400955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.537 [2024-11-20 14:48:22.406853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.537 [2024-11-20 14:48:22.406953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.537 [2024-11-20 14:48:22.406970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.415812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.416008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.416025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.425369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.425548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.425564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.432418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 
00:28:15.538 [2024-11-20 14:48:22.432621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.432637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.439856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.440252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.440269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.447369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.447670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.447688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.456889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.457079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.457096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.465249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.465459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.465476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.473853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.474021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.474038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.478277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.478491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.478509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.487363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.487546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.487562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.496555] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.496732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.496752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.505167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.505329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.505345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.511201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.511427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.511444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.519149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.519285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.519301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:28:15.538 [2024-11-20 14:48:22.528544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.528800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.528817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.536620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.536825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.536841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.542978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.543017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.543032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.547748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.547798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.547814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.550173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.550229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.550257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.556532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.556700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.556716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.565354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.565548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.565563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.574526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.574717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.574733] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.583501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.583686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.583701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.538 [2024-11-20 14:48:22.592833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.538 [2024-11-20 14:48:22.592873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.538 [2024-11-20 14:48:22.592888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.799 [2024-11-20 14:48:22.601933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.799 [2024-11-20 14:48:22.602133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.799 [2024-11-20 14:48:22.602149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.799 [2024-11-20 14:48:22.610039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.799 [2024-11-20 14:48:22.610156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.799 [2024-11-20 14:48:22.610172] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.799 [2024-11-20 14:48:22.616710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.799 [2024-11-20 14:48:22.616910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.799 [2024-11-20 14:48:22.616925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.799 [2024-11-20 14:48:22.626175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.799 [2024-11-20 14:48:22.626442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.799 [2024-11-20 14:48:22.626459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.799 [2024-11-20 14:48:22.635494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.799 [2024-11-20 14:48:22.635718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.799 [2024-11-20 14:48:22.635734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.799 [2024-11-20 14:48:22.645425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.799 [2024-11-20 14:48:22.645588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:15.799 [2024-11-20 14:48:22.645604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.799 [2024-11-20 14:48:22.655061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.799 [2024-11-20 14:48:22.655263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.799 [2024-11-20 14:48:22.655280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.799 [2024-11-20 14:48:22.664385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.799 [2024-11-20 14:48:22.664580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.799 [2024-11-20 14:48:22.664596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.799 [2024-11-20 14:48:22.673740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.673934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.673949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.683192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.683391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.683407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.691496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.691551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.691567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.700582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.700787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.700803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.710335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.710605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.710624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.716970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.717192] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.717208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.725763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.725968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.725984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.734429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.734661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.734677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.743621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.743797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.743813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.752172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.752384] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.752400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.760585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.760848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.760874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.769305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.769524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.769540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.778048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.778252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.778268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.787714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 
00:28:15.800 [2024-11-20 14:48:22.787873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.787889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.796771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.796977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.796993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.806446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.806586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.806602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.815556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.815727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.815743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.825074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.825277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.825293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.834742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.834927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.834943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.844692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.844882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.844897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.800 [2024-11-20 14:48:22.854350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:15.800 [2024-11-20 14:48:22.854574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.800 [2024-11-20 14:48:22.854590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.061 [2024-11-20 14:48:22.863746] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.061 [2024-11-20 14:48:22.863959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.061 [2024-11-20 14:48:22.863975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.061 [2024-11-20 14:48:22.873127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.061 [2024-11-20 14:48:22.873300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.061 [2024-11-20 14:48:22.873316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.061 [2024-11-20 14:48:22.883183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.061 [2024-11-20 14:48:22.883408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.061 [2024-11-20 14:48:22.883424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.061 [2024-11-20 14:48:22.893020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.061 [2024-11-20 14:48:22.893225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.061 [2024-11-20 14:48:22.893240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:28:16.061 [2024-11-20 14:48:22.902419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.061 [2024-11-20 14:48:22.902608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.061 [2024-11-20 14:48:22.902623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.061 [2024-11-20 14:48:22.910343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.061 [2024-11-20 14:48:22.910406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.061 [2024-11-20 14:48:22.910422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.061 [2024-11-20 14:48:22.918724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.061 [2024-11-20 14:48:22.919055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.061 [2024-11-20 14:48:22.919071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.061 [2024-11-20 14:48:22.922843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.061 [2024-11-20 14:48:22.922888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.061 [2024-11-20 14:48:22.922904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.061 [2024-11-20 14:48:22.925377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.061 [2024-11-20 14:48:22.925430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.061 [2024-11-20 14:48:22.925445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.061 [2024-11-20 14:48:22.930492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.061 [2024-11-20 14:48:22.930658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.061 [2024-11-20 14:48:22.930678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.061 [2024-11-20 14:48:22.939375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.061 [2024-11-20 14:48:22.939568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.061 [2024-11-20 14:48:22.939584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.061 [2024-11-20 14:48:22.948693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.061 [2024-11-20 14:48:22.948882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.061 [2024-11-20 14:48:22.948897] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.061 [2024-11-20 14:48:22.957338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.061 [2024-11-20 14:48:22.957511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.061 [2024-11-20 14:48:22.957527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.061 [2024-11-20 14:48:22.966615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.061 [2024-11-20 14:48:22.966789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.061 [2024-11-20 14:48:22.966805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.061 [2024-11-20 14:48:22.973359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.061 [2024-11-20 14:48:22.973593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.062 [2024-11-20 14:48:22.973610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.062 [2024-11-20 14:48:22.981953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.062 [2024-11-20 14:48:22.982201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.062 [2024-11-20 14:48:22.982217] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.062 [2024-11-20 14:48:22.990463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.062 [2024-11-20 14:48:22.990643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.062 [2024-11-20 14:48:22.990659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.062 [2024-11-20 14:48:22.999401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.062 [2024-11-20 14:48:22.999605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.062 [2024-11-20 14:48:22.999621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.062 [2024-11-20 14:48:23.007736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.062 [2024-11-20 14:48:23.007911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.062 [2024-11-20 14:48:23.007927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.062 [2024-11-20 14:48:23.016476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.062 [2024-11-20 14:48:23.016667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:16.062 [2024-11-20 14:48:23.016683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.062 [2024-11-20 14:48:23.024954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.062 [2024-11-20 14:48:23.025194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.062 [2024-11-20 14:48:23.025212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.062 [2024-11-20 14:48:23.033291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.062 [2024-11-20 14:48:23.033512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.062 [2024-11-20 14:48:23.033528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.062 [2024-11-20 14:48:23.041117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.062 [2024-11-20 14:48:23.041295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.062 [2024-11-20 14:48:23.041311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.062 [2024-11-20 14:48:23.050252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.062 [2024-11-20 14:48:23.050413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.062 [2024-11-20 14:48:23.050429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.062 [2024-11-20 14:48:23.058575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.062 [2024-11-20 14:48:23.058617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.062 [2024-11-20 14:48:23.058633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.062 [2024-11-20 14:48:23.065531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.062 [2024-11-20 14:48:23.065678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.062 [2024-11-20 14:48:23.065693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.062 [2024-11-20 14:48:23.072646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.062 [2024-11-20 14:48:23.072712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.062 [2024-11-20 14:48:23.072727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.062 [2024-11-20 14:48:23.079423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.062 [2024-11-20 14:48:23.079599] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.062 [2024-11-20 14:48:23.079615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.062 [2024-11-20 14:48:23.086872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.062 [2024-11-20 14:48:23.087083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.062 [2024-11-20 14:48:23.087099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.062 [2024-11-20 14:48:23.095467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.062 [2024-11-20 14:48:23.095657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.062 [2024-11-20 14:48:23.095673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.062 [2024-11-20 14:48:23.104159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.062 [2024-11-20 14:48:23.104420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.062 [2024-11-20 14:48:23.104437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.062 [2024-11-20 14:48:23.111450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.062 [2024-11-20 14:48:23.111658] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.062 [2024-11-20 14:48:23.111673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.323 [2024-11-20 14:48:23.121155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.323 [2024-11-20 14:48:23.121372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.323 [2024-11-20 14:48:23.121388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.323 [2024-11-20 14:48:23.130839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.323 [2024-11-20 14:48:23.131072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.323 [2024-11-20 14:48:23.131087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.323 [2024-11-20 14:48:23.140215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.323 [2024-11-20 14:48:23.140426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.323 [2024-11-20 14:48:23.140442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.323 [2024-11-20 14:48:23.149455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with 
pdu=0x200016eff3c8 00:28:16.323 [2024-11-20 14:48:23.149622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.323 [2024-11-20 14:48:23.149640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.323 [2024-11-20 14:48:23.157764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.323 [2024-11-20 14:48:23.157972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.323 [2024-11-20 14:48:23.157988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.323 [2024-11-20 14:48:23.166653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.323 [2024-11-20 14:48:23.166837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.323 [2024-11-20 14:48:23.166852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.323 [2024-11-20 14:48:23.175532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.323 [2024-11-20 14:48:23.175703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.323 [2024-11-20 14:48:23.175719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.323 [2024-11-20 14:48:23.184325] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.323 [2024-11-20 14:48:23.184572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.323 [2024-11-20 14:48:23.184588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.323 [2024-11-20 14:48:23.192734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.323 [2024-11-20 14:48:23.192944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.323 [2024-11-20 14:48:23.192960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.323 [2024-11-20 14:48:23.201353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.323 [2024-11-20 14:48:23.201550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.323 [2024-11-20 14:48:23.201566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.209226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.209475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.209492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.218295] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.219267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.219284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.324 3874.00 IOPS, 484.25 MiB/s [2024-11-20T13:48:23.384Z] [2024-11-20 14:48:23.227620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.227822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.227839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.237385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.237610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.237625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.247540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.247769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.247784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.256813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.257047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.257063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.261148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.261187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.261203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.266223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.266437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.266454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.275055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.275255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.275271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.281908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.281980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.281995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.290791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.290979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.290995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.300050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.300173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.300188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.308344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.308403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 
[2024-11-20 14:48:23.308418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.316827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.316867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.316883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.323802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.324150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.324167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.333238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.333472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.333488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.340675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.340715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.340731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.346459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.346499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.346514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.350890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.350931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.350947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.357859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.358064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.358084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.366751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.366949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.366965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.324 [2024-11-20 14:48:23.375974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.324 [2024-11-20 14:48:23.376251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.324 [2024-11-20 14:48:23.376267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.586 [2024-11-20 14:48:23.383736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.586 [2024-11-20 14:48:23.383776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.586 [2024-11-20 14:48:23.383792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.586 [2024-11-20 14:48:23.390113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.586 [2024-11-20 14:48:23.390167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.586 [2024-11-20 14:48:23.390183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.586 [2024-11-20 14:48:23.394945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.586 [2024-11-20 14:48:23.395189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.586 [2024-11-20 14:48:23.395206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.586 [2024-11-20 14:48:23.401874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.586 [2024-11-20 14:48:23.401918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.586 [2024-11-20 14:48:23.401934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.586 [2024-11-20 14:48:23.409414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.586 [2024-11-20 14:48:23.409609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.586 [2024-11-20 14:48:23.409624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.586 [2024-11-20 14:48:23.418803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.586 [2024-11-20 14:48:23.419080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.586 [2024-11-20 14:48:23.419096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.586 [2024-11-20 14:48:23.426562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.586 
[2024-11-20 14:48:23.426740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.586 [2024-11-20 14:48:23.426755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.586 [2024-11-20 14:48:23.432113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.586 [2024-11-20 14:48:23.432330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.586 [2024-11-20 14:48:23.432346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.586 [2024-11-20 14:48:23.440190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.586 [2024-11-20 14:48:23.440376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.586 [2024-11-20 14:48:23.440392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.586 [2024-11-20 14:48:23.449192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.586 [2024-11-20 14:48:23.449400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.586 [2024-11-20 14:48:23.449415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.586 [2024-11-20 14:48:23.457268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.586 [2024-11-20 14:48:23.457439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.586 [2024-11-20 14:48:23.457455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.586 [2024-11-20 14:48:23.465104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.586 [2024-11-20 14:48:23.465239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.586 [2024-11-20 14:48:23.465260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.586 [2024-11-20 14:48:23.473724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.586 [2024-11-20 14:48:23.473922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.586 [2024-11-20 14:48:23.473939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.586 [2024-11-20 14:48:23.482270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.586 [2024-11-20 14:48:23.482468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.586 [2024-11-20 14:48:23.482484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.586 [2024-11-20 14:48:23.490898] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.491095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.491110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.499443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.499638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.499654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.507564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.507753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.507770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.515233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.515282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.515297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:28:16.587 [2024-11-20 14:48:23.521976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.522018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.522034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.529050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.529090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.529106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.534976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.535020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.535035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.543581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.543793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.543809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.551399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.551650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.551667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.560255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.560489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.560510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.569010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.569214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.569230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.577383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.577626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.577641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.584610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.584784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.584800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.590377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.590599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.590616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.599354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.599554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.599570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.607059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.607302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.607320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.613049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.613099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.613115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.616965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.617141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.617156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.622913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.622956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.622974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.625376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.625444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 
[2024-11-20 14:48:23.625460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.628782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.629020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.629036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.587 [2024-11-20 14:48:23.638113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.587 [2024-11-20 14:48:23.638326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.587 [2024-11-20 14:48:23.638342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.848 [2024-11-20 14:48:23.647681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.848 [2024-11-20 14:48:23.647835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.848 [2024-11-20 14:48:23.647851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.848 [2024-11-20 14:48:23.657646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.848 [2024-11-20 14:48:23.657824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.848 [2024-11-20 14:48:23.657840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.848 [2024-11-20 14:48:23.666954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.848 [2024-11-20 14:48:23.667146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.848 [2024-11-20 14:48:23.667162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.848 [2024-11-20 14:48:23.674010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.848 [2024-11-20 14:48:23.674200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.848 [2024-11-20 14:48:23.674216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.848 [2024-11-20 14:48:23.682953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.848 [2024-11-20 14:48:23.683015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.848 [2024-11-20 14:48:23.683030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.848 [2024-11-20 14:48:23.692478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.848 [2024-11-20 14:48:23.692649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.848 [2024-11-20 14:48:23.692665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.848 [2024-11-20 14:48:23.701562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.848 [2024-11-20 14:48:23.701775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.848 [2024-11-20 14:48:23.701791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.710197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.710405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.710421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.719593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.719797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.719814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.729031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.729239] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.729260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.737789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.737944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.737959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.746516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.746751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.746767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.755822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.756019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.756035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.764604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with 
pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.764778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.764795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.772491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.772688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.772704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.781065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.781266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.781282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.789312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.789355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.789371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.797620] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.797817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.797832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.805572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.805611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.805626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.813850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.814032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.814048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.823004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.823206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.823221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.831918] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.832110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.832126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.839743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.840005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.840025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.846549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.846753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.846770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.853625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.853994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.854010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:28:16.849 [2024-11-20 14:48:23.862511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.862693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.862709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.870578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.870776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.870792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.879156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.879334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.879350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.887806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.888032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.888048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.896222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.896270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.896285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.849 [2024-11-20 14:48:23.904732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:16.849 [2024-11-20 14:48:23.904979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.849 [2024-11-20 14:48:23.904997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.110 [2024-11-20 14:48:23.912960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.110 [2024-11-20 14:48:23.913202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.110 [2024-11-20 14:48:23.913218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.110 [2024-11-20 14:48:23.921178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.110 [2024-11-20 14:48:23.921411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.110 [2024-11-20 14:48:23.921427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.110 [2024-11-20 14:48:23.929501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.110 [2024-11-20 14:48:23.929683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.110 [2024-11-20 14:48:23.929699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.110 [2024-11-20 14:48:23.939133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.110 [2024-11-20 14:48:23.939296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.110 [2024-11-20 14:48:23.939312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.110 [2024-11-20 14:48:23.947745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.110 [2024-11-20 14:48:23.947964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.110 [2024-11-20 14:48:23.947979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.110 [2024-11-20 14:48:23.956834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.110 [2024-11-20 14:48:23.956985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.110 [2024-11-20 14:48:23.957001] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.110 [2024-11-20 14:48:23.966606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.110 [2024-11-20 14:48:23.966807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.110 [2024-11-20 14:48:23.966823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.110 [2024-11-20 14:48:23.976213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.110 [2024-11-20 14:48:23.976345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.110 [2024-11-20 14:48:23.976361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.110 [2024-11-20 14:48:23.985315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.110 [2024-11-20 14:48:23.985477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:23.985493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:23.994235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:23.994464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:17.111 [2024-11-20 14:48:23.994480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.004565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.004744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.004760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.013373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.013606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.013622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.022000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.022041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.022057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.029622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.029881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.029899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.038256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.038512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.038529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.046631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.046832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.046848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.055571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.055785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.055801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.063870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.064060] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.064079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.072596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.072836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.072851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.081705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.081907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.081922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.090227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.090454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.090470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.098512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.098725] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.098741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.107450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.107623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.107638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.114831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.114871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.114888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.117779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.117823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.117839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.120166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with 
pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.120209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.120225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.122691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.122778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.122793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.125690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.125895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.125911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.136496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.136670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.136686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.145949] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.146211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.146227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.155958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.156177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.156193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.111 [2024-11-20 14:48:24.166980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.111 [2024-11-20 14:48:24.167170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.111 [2024-11-20 14:48:24.167186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.372 [2024-11-20 14:48:24.176622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.372 [2024-11-20 14:48:24.176846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.372 [2024-11-20 14:48:24.176861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:17.372 [2024-11-20 
14:48:24.186836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.372 [2024-11-20 14:48:24.186992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.372 [2024-11-20 14:48:24.187008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:17.372 [2024-11-20 14:48:24.197077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.372 [2024-11-20 14:48:24.197330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.372 [2024-11-20 14:48:24.197346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:17.372 [2024-11-20 14:48:24.206242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.372 [2024-11-20 14:48:24.206467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.372 [2024-11-20 14:48:24.206483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:17.372 [2024-11-20 14:48:24.216259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8 00:28:17.372 [2024-11-20 14:48:24.216497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.372 [2024-11-20 14:48:24.216523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0
00:28:17.372 3867.00 IOPS, 483.38 MiB/s [2024-11-20T13:48:24.432Z]
[2024-11-20 14:48:24.225560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xebff00) with pdu=0x200016eff3c8
00:28:17.372 [2024-11-20 14:48:24.225832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.372 [2024-11-20 14:48:24.225850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:17.372
00:28:17.372 Latency(us)
00:28:17.372 [2024-11-20T13:48:24.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:17.372 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:17.372 nvme0n1 : 2.01 3865.06 483.13 0.00 0.00 4132.34 1146.88 10813.44
00:28:17.372 [2024-11-20T13:48:24.432Z] ===================================================================================================================
00:28:17.372 [2024-11-20T13:48:24.432Z] Total : 3865.06 483.13 0.00 0.00 4132.34 1146.88 10813.44
00:28:17.372 {
00:28:17.372   "results": [
00:28:17.372     {
00:28:17.372       "job": "nvme0n1",
00:28:17.372       "core_mask": "0x2",
00:28:17.372       "workload": "randwrite",
00:28:17.372       "status": "finished",
00:28:17.372       "queue_depth": 16,
00:28:17.372       "io_size": 131072,
00:28:17.372       "runtime": 2.006176,
00:28:17.372       "iops": 3865.0646802673346,
00:28:17.372       "mibps": 483.1330850334168,
00:28:17.372       "io_failed": 0,
00:28:17.372       "io_timeout": 0,
00:28:17.372       "avg_latency_us": 4132.342271515777,
00:28:17.372       "min_latency_us": 1146.88,
00:28:17.372       "max_latency_us": 10813.44
00:28:17.372     }
00:28:17.372   ],
00:28:17.372   "core_count": 1
00:28:17.372 }
00:28:17.372 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:17.372 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:17.372 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:17.372 | .driver_specific
00:28:17.372 | .nvme_error
00:28:17.372 | .status_code
00:28:17.372 | .command_transient_transport_error'
00:28:17.372 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:17.372 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 251 > 0 ))
00:28:17.372 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4086624
00:28:17.372 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4086624 ']'
00:28:17.372 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4086624
00:28:17.372 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:17.372 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:17.372 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4086624
00:28:17.631 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:17.631 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:17.631 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4086624'
00:28:17.631 killing process with pid 4086624
14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4086624
00:28:17.631 Received shutdown signal, test time was about 2.000000 seconds
00:28:17.631
00:28:17.631 Latency(us)
00:28:17.631 [2024-11-20T13:48:24.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:17.631 [2024-11-20T13:48:24.691Z] ===================================================================================================================
00:28:17.631 [2024-11-20T13:48:24.691Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:17.631 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4086624
00:28:17.631 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 4084568
00:28:17.631 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4084568 ']'
00:28:17.631 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4084568
00:28:17.631 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:17.631 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:17.631 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4084568
00:28:17.631 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:17.631 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:17.631 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4084568'
killing process with pid 4084568
14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4084568
14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4084568
00:28:17.890
00:28:17.890 real 0m12.542s
00:28:17.890 user 0m24.957s
00:28:17.890 sys 0m2.757s
00:28:17.890 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:17.890 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:17.890 ************************************
00:28:17.890 END TEST nvmf_digest_error
00:28:17.890 ************************************
00:28:17.890 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:28:17.890 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:28:17.890 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:17.890 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:28:17.890 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:17.890 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:28:17.890 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:17.890 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:17.891 rmmod nvme_tcp
00:28:17.891 rmmod nvme_fabrics
00:28:17.891 rmmod nvme_keyring
00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 4084568 ']'
00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 4084568
00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 4084568 ']'
00:28:17.891 14:48:24
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 4084568 00:28:17.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4084568) - No such process 00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 4084568 is not found' 00:28:17.891 Process with pid 4084568 is not found 00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.891 14:48:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.796 14:48:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:19.796 00:28:19.796 real 0m34.500s 00:28:19.796 user 0m54.002s 00:28:19.796 sys 0m9.977s 00:28:19.796 14:48:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:19.796 14:48:26 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:19.796 ************************************ 00:28:19.796 END TEST nvmf_digest 00:28:19.796 ************************************ 00:28:19.796 14:48:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:19.796 14:48:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:19.796 14:48:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:19.796 14:48:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:19.796 14:48:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:19.796 14:48:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:19.796 14:48:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.055 ************************************ 00:28:20.055 START TEST nvmf_bdevperf 00:28:20.055 ************************************ 00:28:20.055 14:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:20.055 * Looking for test storage... 
00:28:20.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:20.055 14:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:20.055 14:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:28:20.055 14:48:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:20.055 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:20.055 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:20.055 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:20.055 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:20.055 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:20.055 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:20.055 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:20.055 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:20.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.056 --rc genhtml_branch_coverage=1 00:28:20.056 --rc genhtml_function_coverage=1 00:28:20.056 --rc genhtml_legend=1 00:28:20.056 --rc geninfo_all_blocks=1 00:28:20.056 --rc geninfo_unexecuted_blocks=1 00:28:20.056 00:28:20.056 ' 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:28:20.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.056 --rc genhtml_branch_coverage=1 00:28:20.056 --rc genhtml_function_coverage=1 00:28:20.056 --rc genhtml_legend=1 00:28:20.056 --rc geninfo_all_blocks=1 00:28:20.056 --rc geninfo_unexecuted_blocks=1 00:28:20.056 00:28:20.056 ' 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:20.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.056 --rc genhtml_branch_coverage=1 00:28:20.056 --rc genhtml_function_coverage=1 00:28:20.056 --rc genhtml_legend=1 00:28:20.056 --rc geninfo_all_blocks=1 00:28:20.056 --rc geninfo_unexecuted_blocks=1 00:28:20.056 00:28:20.056 ' 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:20.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.056 --rc genhtml_branch_coverage=1 00:28:20.056 --rc genhtml_function_coverage=1 00:28:20.056 --rc genhtml_legend=1 00:28:20.056 --rc geninfo_all_blocks=1 00:28:20.056 --rc geninfo_unexecuted_blocks=1 00:28:20.056 00:28:20.056 ' 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:20.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:20.056 14:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:25.329 14:48:32 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:25.329 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.329 
14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:25.329 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:25.329 Found net devices under 0000:31:00.0: cvl_0_0 00:28:25.329 14:48:32 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:25.329 Found net devices under 0000:31:00.1: cvl_0_1 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:25.329 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:25.588 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:28:25.588 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:25.588 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:25.588 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:25.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:25.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:28:25.588 00:28:25.588 --- 10.0.0.2 ping statistics --- 00:28:25.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.588 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:28:25.588 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:25.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:25.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:28:25.588 00:28:25.588 --- 10.0.0.1 ping statistics --- 00:28:25.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.588 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:28:25.588 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:25.588 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:25.588 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:25.588 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=4091649 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 4091649 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 4091649 ']' 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:25.589 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:25.589 [2024-11-20 14:48:32.478012] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:28:25.589 [2024-11-20 14:48:32.478058] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.589 [2024-11-20 14:48:32.551605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:25.589 [2024-11-20 14:48:32.581228] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.589 [2024-11-20 14:48:32.581262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:25.589 [2024-11-20 14:48:32.581269] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.589 [2024-11-20 14:48:32.581274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.589 [2024-11-20 14:48:32.581278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
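The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the waitforlisten helper, which polls until the freshly launched nvmf_tgt exposes its RPC socket (the log shows rpc_addr=/var/tmp/spdk.sock and max_retries=100). A minimal sketch of that polling loop, not SPDK's exact implementation — the real helper checks for a listening Unix socket, while this dependency-free version only checks that the path exists, and the 0.1s delay is an assumption:

```shell
# Poll until the target's RPC socket path appears, or the target dies,
# or max_retries attempts are exhausted.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    local i=0
    while [ "$i" -lt "$max_retries" ]; do
        # Give up early if the process exited before it could listen.
        kill -0 "$pid" 2>/dev/null || return 1
        # Sketch simplification: test existence of the path, not that it
        # is a bound, listening socket.
        [ -e "$rpc_addr" ] && return 0
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}
```

Once the socket is up, subsequent rpc_cmd calls (nvmf_create_transport, bdev_malloc_create, and so on, as seen below) are issued against it.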
00:28:25.589 [2024-11-20 14:48:32.582430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.589 [2024-11-20 14:48:32.582547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:25.589 [2024-11-20 14:48:32.582553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.848 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:25.848 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:25.848 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:25.848 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:25.848 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:25.849 [2024-11-20 14:48:32.681985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:25.849 Malloc0 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:25.849 [2024-11-20 14:48:32.730176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:25.849 
14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:25.849 { 00:28:25.849 "params": { 00:28:25.849 "name": "Nvme$subsystem", 00:28:25.849 "trtype": "$TEST_TRANSPORT", 00:28:25.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.849 "adrfam": "ipv4", 00:28:25.849 "trsvcid": "$NVMF_PORT", 00:28:25.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.849 "hdgst": ${hdgst:-false}, 00:28:25.849 "ddgst": ${ddgst:-false} 00:28:25.849 }, 00:28:25.849 "method": "bdev_nvme_attach_controller" 00:28:25.849 } 00:28:25.849 EOF 00:28:25.849 )") 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:25.849 14:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:25.849 "params": { 00:28:25.849 "name": "Nvme1", 00:28:25.849 "trtype": "tcp", 00:28:25.849 "traddr": "10.0.0.2", 00:28:25.849 "adrfam": "ipv4", 00:28:25.849 "trsvcid": "4420", 00:28:25.849 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:25.849 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:25.849 "hdgst": false, 00:28:25.849 "ddgst": false 00:28:25.849 }, 00:28:25.849 "method": "bdev_nvme_attach_controller" 00:28:25.849 }' 00:28:25.849 [2024-11-20 14:48:32.768137] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:28:25.849 [2024-11-20 14:48:32.768187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4091677 ] 00:28:25.849 [2024-11-20 14:48:32.845067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.849 [2024-11-20 14:48:32.881406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.108 Running I/O for 1 seconds... 00:28:27.044 11185.00 IOPS, 43.69 MiB/s 00:28:27.044 Latency(us) 00:28:27.044 [2024-11-20T13:48:34.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.044 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:27.044 Verification LBA range: start 0x0 length 0x4000 00:28:27.044 Nvme1n1 : 1.01 11219.44 43.83 0.00 0.00 11354.28 2676.05 13434.88 00:28:27.044 [2024-11-20T13:48:34.104Z] =================================================================================================================== 00:28:27.044 [2024-11-20T13:48:34.104Z] Total : 11219.44 43.83 0.00 0.00 11354.28 2676.05 13434.88 00:28:27.303 14:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=4092013 00:28:27.303 14:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:27.303 14:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:27.303 14:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:27.303 14:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:27.303 14:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:27.303 14:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:28:27.303 14:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.303 { 00:28:27.303 "params": { 00:28:27.303 "name": "Nvme$subsystem", 00:28:27.303 "trtype": "$TEST_TRANSPORT", 00:28:27.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.303 "adrfam": "ipv4", 00:28:27.303 "trsvcid": "$NVMF_PORT", 00:28:27.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.303 "hdgst": ${hdgst:-false}, 00:28:27.303 "ddgst": ${ddgst:-false} 00:28:27.303 }, 00:28:27.303 "method": "bdev_nvme_attach_controller" 00:28:27.303 } 00:28:27.303 EOF 00:28:27.303 )") 00:28:27.303 14:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:27.303 14:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:27.303 14:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:27.303 14:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:27.303 "params": { 00:28:27.303 "name": "Nvme1", 00:28:27.303 "trtype": "tcp", 00:28:27.303 "traddr": "10.0.0.2", 00:28:27.303 "adrfam": "ipv4", 00:28:27.303 "trsvcid": "4420", 00:28:27.303 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.303 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:27.303 "hdgst": false, 00:28:27.303 "ddgst": false 00:28:27.303 }, 00:28:27.303 "method": "bdev_nvme_attach_controller" 00:28:27.303 }' 00:28:27.303 [2024-11-20 14:48:34.231118] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:28:27.303 [2024-11-20 14:48:34.231174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4092013 ] 00:28:27.303 [2024-11-20 14:48:34.309345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.303 [2024-11-20 14:48:34.343647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.870 Running I/O for 15 seconds... 00:28:29.747 11106.00 IOPS, 43.38 MiB/s [2024-11-20T13:48:37.378Z] 11762.50 IOPS, 45.95 MiB/s [2024-11-20T13:48:37.378Z] 14:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 4091649 00:28:30.318 14:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:30.318 [2024-11-20 14:48:37.210961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.318 [2024-11-20 14:48:37.210997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.318 [2024-11-20 14:48:37.211012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.318 [2024-11-20 14:48:37.211019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.318 [2024-11-20 14:48:37.211028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211041] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 
14:48:37.211201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211288] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 
14:48:37.211441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211517] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.319 [2024-11-20 14:48:37.211564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.319 [2024-11-20 14:48:37.211569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.320 [2024-11-20 14:48:37.211576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.320 [2024-11-20 14:48:37.211581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:28:30.320 [2024-11-20 14:48:37.211587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.320 [2024-11-20 14:48:37.211592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs elided (14:48:37.211599 through 14:48:37.212683): READ commands (sqid:1, nsid:1, lba 112832 through 113376, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (sqid:1, nsid:1, lba 113392 through 113496, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each followed by an ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 completion ...]
00:28:30.322 [2024-11-20 14:48:37.212689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22559b0 is same with the state(6) to be set
00:28:30.322 [2024-11-20 14:48:37.212695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:30.322 [2024-11-20 14:48:37.212699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:30.322 [2024-11-20 14:48:37.212705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113384 len:8 PRP1 0x0 PRP2 0x0
00:28:30.322 [2024-11-20 14:48:37.212710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.322 [2024-11-20 14:48:37.215233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:30.322 [2024-11-20 14:48:37.215279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:30.322 [2024-11-20 14:48:37.215788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.322 [2024-11-20 14:48:37.215802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:30.322 [2024-11-20 14:48:37.215809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:30.322 [2024-11-20 14:48:37.215960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:30.322 [2024-11-20 14:48:37.216111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:30.322 [2024-11-20 14:48:37.216119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:30.322 [2024-11-20 14:48:37.216124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:30.322 [2024-11-20 14:48:37.216131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
[... five further reset attempts elided (at 14:48:37.228058, .240778, .253399, .265991 and .278616): each failed identically, with posix_sock_create connect() errno = 111 against addr=10.0.0.2, port=4420 on tqpair=0x22576a0, Failed to flush tqpair=0x22576a0 (9): Bad file descriptor, Ctrlr is in error state, controller reinitialization failed, in failed state, and Resetting controller failed ...]
00:28:30.322 [2024-11-20 14:48:37.291211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.322 [2024-11-20 14:48:37.291712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.322 [2024-11-20 14:48:37.291727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.322 [2024-11-20 14:48:37.291732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.322 [2024-11-20 14:48:37.291881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.322 [2024-11-20 14:48:37.292032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.322 [2024-11-20 14:48:37.292038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.322 [2024-11-20 14:48:37.292044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.322 [2024-11-20 14:48:37.292049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.322 [2024-11-20 14:48:37.303824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.322 [2024-11-20 14:48:37.304349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.322 [2024-11-20 14:48:37.304380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.322 [2024-11-20 14:48:37.304389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.322 [2024-11-20 14:48:37.304557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.322 [2024-11-20 14:48:37.304709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.322 [2024-11-20 14:48:37.304716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.322 [2024-11-20 14:48:37.304722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.323 [2024-11-20 14:48:37.304728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.323 [2024-11-20 14:48:37.316511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.323 [2024-11-20 14:48:37.317084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.323 [2024-11-20 14:48:37.317115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.323 [2024-11-20 14:48:37.317127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.323 [2024-11-20 14:48:37.317299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.323 [2024-11-20 14:48:37.317453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.323 [2024-11-20 14:48:37.317460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.323 [2024-11-20 14:48:37.317466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.323 [2024-11-20 14:48:37.317472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.323 [2024-11-20 14:48:37.329121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.323 [2024-11-20 14:48:37.329667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.323 [2024-11-20 14:48:37.329699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.323 [2024-11-20 14:48:37.329707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.323 [2024-11-20 14:48:37.329872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.323 [2024-11-20 14:48:37.330025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.323 [2024-11-20 14:48:37.330032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.323 [2024-11-20 14:48:37.330039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.323 [2024-11-20 14:48:37.330045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.323 [2024-11-20 14:48:37.341848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.323 [2024-11-20 14:48:37.342378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.323 [2024-11-20 14:48:37.342409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.323 [2024-11-20 14:48:37.342417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.323 [2024-11-20 14:48:37.342583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.323 [2024-11-20 14:48:37.342736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.323 [2024-11-20 14:48:37.342743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.323 [2024-11-20 14:48:37.342749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.323 [2024-11-20 14:48:37.342755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.323 [2024-11-20 14:48:37.354556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.323 [2024-11-20 14:48:37.355146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.323 [2024-11-20 14:48:37.355177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.323 [2024-11-20 14:48:37.355187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.323 [2024-11-20 14:48:37.355359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.323 [2024-11-20 14:48:37.355516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.323 [2024-11-20 14:48:37.355524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.323 [2024-11-20 14:48:37.355529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.323 [2024-11-20 14:48:37.355535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.323 [2024-11-20 14:48:37.367178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.323 [2024-11-20 14:48:37.367656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.323 [2024-11-20 14:48:37.367671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.323 [2024-11-20 14:48:37.367677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.323 [2024-11-20 14:48:37.367827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.323 [2024-11-20 14:48:37.367976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.323 [2024-11-20 14:48:37.367983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.323 [2024-11-20 14:48:37.367989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.323 [2024-11-20 14:48:37.367994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.594 [2024-11-20 14:48:37.379807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.594 [2024-11-20 14:48:37.380347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.594 [2024-11-20 14:48:37.380379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.594 [2024-11-20 14:48:37.380388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.594 [2024-11-20 14:48:37.380555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.594 [2024-11-20 14:48:37.380708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.594 [2024-11-20 14:48:37.380715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.594 [2024-11-20 14:48:37.380721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.594 [2024-11-20 14:48:37.380728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.594 [2024-11-20 14:48:37.392515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.594 [2024-11-20 14:48:37.393019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.594 [2024-11-20 14:48:37.393034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.594 [2024-11-20 14:48:37.393040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.594 [2024-11-20 14:48:37.393190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.594 [2024-11-20 14:48:37.393347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.594 [2024-11-20 14:48:37.393354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.594 [2024-11-20 14:48:37.393364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.594 [2024-11-20 14:48:37.393369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.594 [2024-11-20 14:48:37.405148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.594 [2024-11-20 14:48:37.405649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.594 [2024-11-20 14:48:37.405662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.594 [2024-11-20 14:48:37.405669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.594 [2024-11-20 14:48:37.405818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.594 [2024-11-20 14:48:37.405968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.594 [2024-11-20 14:48:37.405975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.594 [2024-11-20 14:48:37.405980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.594 [2024-11-20 14:48:37.405985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.594 [2024-11-20 14:48:37.417757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.594 [2024-11-20 14:48:37.418343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.594 [2024-11-20 14:48:37.418375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.594 [2024-11-20 14:48:37.418383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.594 [2024-11-20 14:48:37.418551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.594 [2024-11-20 14:48:37.418704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.594 [2024-11-20 14:48:37.418711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.594 [2024-11-20 14:48:37.418717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.594 [2024-11-20 14:48:37.418723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.594 [2024-11-20 14:48:37.430380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.594 [2024-11-20 14:48:37.430917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.595 [2024-11-20 14:48:37.430949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.595 [2024-11-20 14:48:37.430957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.595 [2024-11-20 14:48:37.431123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.595 [2024-11-20 14:48:37.431285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.595 [2024-11-20 14:48:37.431293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.595 [2024-11-20 14:48:37.431299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.595 [2024-11-20 14:48:37.431305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.595 [2024-11-20 14:48:37.443099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.595 [2024-11-20 14:48:37.443716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.595 [2024-11-20 14:48:37.443747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.595 [2024-11-20 14:48:37.443756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.595 [2024-11-20 14:48:37.443921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.595 [2024-11-20 14:48:37.444074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.595 [2024-11-20 14:48:37.444081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.595 [2024-11-20 14:48:37.444087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.595 [2024-11-20 14:48:37.444093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.595 [2024-11-20 14:48:37.455813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.595 [2024-11-20 14:48:37.456432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.595 [2024-11-20 14:48:37.456464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.595 [2024-11-20 14:48:37.456473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.595 [2024-11-20 14:48:37.456640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.595 [2024-11-20 14:48:37.456793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.595 [2024-11-20 14:48:37.456800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.595 [2024-11-20 14:48:37.456806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.595 [2024-11-20 14:48:37.456812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.595 [2024-11-20 14:48:37.468453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.595 [2024-11-20 14:48:37.469068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.595 [2024-11-20 14:48:37.469099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.595 [2024-11-20 14:48:37.469108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.595 [2024-11-20 14:48:37.469280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.595 [2024-11-20 14:48:37.469434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.595 [2024-11-20 14:48:37.469441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.595 [2024-11-20 14:48:37.469447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.595 [2024-11-20 14:48:37.469454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.595 [2024-11-20 14:48:37.481112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.595 [2024-11-20 14:48:37.481602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.595 [2024-11-20 14:48:37.481618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.595 [2024-11-20 14:48:37.481628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.595 [2024-11-20 14:48:37.481779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.595 [2024-11-20 14:48:37.481929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.595 [2024-11-20 14:48:37.481936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.595 [2024-11-20 14:48:37.481941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.595 [2024-11-20 14:48:37.481947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.595 [2024-11-20 14:48:37.493741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.595 [2024-11-20 14:48:37.494234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.595 [2024-11-20 14:48:37.494252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.595 [2024-11-20 14:48:37.494258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.595 [2024-11-20 14:48:37.494408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.595 [2024-11-20 14:48:37.494559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.595 [2024-11-20 14:48:37.494565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.595 [2024-11-20 14:48:37.494571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.595 [2024-11-20 14:48:37.494576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.595 [2024-11-20 14:48:37.506347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.595 [2024-11-20 14:48:37.506804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.595 [2024-11-20 14:48:37.506817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.595 [2024-11-20 14:48:37.506823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.595 [2024-11-20 14:48:37.506972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.595 [2024-11-20 14:48:37.507122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.595 [2024-11-20 14:48:37.507128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.595 [2024-11-20 14:48:37.507134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.595 [2024-11-20 14:48:37.507140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.595 [2024-11-20 14:48:37.519063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.595 [2024-11-20 14:48:37.519505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.595 [2024-11-20 14:48:37.519519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.595 [2024-11-20 14:48:37.519524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.595 [2024-11-20 14:48:37.519673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.595 [2024-11-20 14:48:37.519827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.595 [2024-11-20 14:48:37.519834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.595 [2024-11-20 14:48:37.519840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.595 [2024-11-20 14:48:37.519845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.595 [2024-11-20 14:48:37.531770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.595 [2024-11-20 14:48:37.532258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.595 [2024-11-20 14:48:37.532272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.595 [2024-11-20 14:48:37.532278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.595 [2024-11-20 14:48:37.532428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.595 [2024-11-20 14:48:37.532577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.595 [2024-11-20 14:48:37.532584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.595 [2024-11-20 14:48:37.532589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.595 [2024-11-20 14:48:37.532594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.595 [2024-11-20 14:48:37.544380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.595 [2024-11-20 14:48:37.544995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.595 [2024-11-20 14:48:37.545027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.595 [2024-11-20 14:48:37.545035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.595 [2024-11-20 14:48:37.545200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.595 [2024-11-20 14:48:37.545361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.595 [2024-11-20 14:48:37.545370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.595 [2024-11-20 14:48:37.545376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.595 [2024-11-20 14:48:37.545382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.596 [2024-11-20 14:48:37.557040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.596 [2024-11-20 14:48:37.557641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.596 [2024-11-20 14:48:37.557673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.596 [2024-11-20 14:48:37.557682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.596 [2024-11-20 14:48:37.557847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.596 [2024-11-20 14:48:37.558000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.596 [2024-11-20 14:48:37.558007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.596 [2024-11-20 14:48:37.558017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.596 [2024-11-20 14:48:37.558022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.596 [2024-11-20 14:48:37.569672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.596 [2024-11-20 14:48:37.570233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.596 [2024-11-20 14:48:37.570270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.596 [2024-11-20 14:48:37.570278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.596 [2024-11-20 14:48:37.570443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.596 [2024-11-20 14:48:37.570596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.596 [2024-11-20 14:48:37.570604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.596 [2024-11-20 14:48:37.570610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.596 [2024-11-20 14:48:37.570615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.596 [2024-11-20 14:48:37.582281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.596 [2024-11-20 14:48:37.582879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.596 [2024-11-20 14:48:37.582910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.596 [2024-11-20 14:48:37.582919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.596 [2024-11-20 14:48:37.583084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.596 [2024-11-20 14:48:37.583237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.596 [2024-11-20 14:48:37.583252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.596 [2024-11-20 14:48:37.583259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.596 [2024-11-20 14:48:37.583264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.596 [2024-11-20 14:48:37.594926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.596 [2024-11-20 14:48:37.595546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.596 [2024-11-20 14:48:37.595577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.596 [2024-11-20 14:48:37.595586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.596 [2024-11-20 14:48:37.595751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.596 [2024-11-20 14:48:37.595905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.596 [2024-11-20 14:48:37.595912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.596 [2024-11-20 14:48:37.595918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.596 [2024-11-20 14:48:37.595924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.596 [2024-11-20 14:48:37.607575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.596 [2024-11-20 14:48:37.608050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.596 [2024-11-20 14:48:37.608065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.596 [2024-11-20 14:48:37.608071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.596 [2024-11-20 14:48:37.608221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.596 [2024-11-20 14:48:37.608375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.596 [2024-11-20 14:48:37.608383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.596 [2024-11-20 14:48:37.608388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.596 [2024-11-20 14:48:37.608393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.596 [2024-11-20 14:48:37.620168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.596 [2024-11-20 14:48:37.620616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.596 [2024-11-20 14:48:37.620631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.596 [2024-11-20 14:48:37.620637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.596 [2024-11-20 14:48:37.620787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.596 [2024-11-20 14:48:37.620937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.596 [2024-11-20 14:48:37.620943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.596 [2024-11-20 14:48:37.620949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.596 [2024-11-20 14:48:37.620953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.596 [2024-11-20 14:48:37.632880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.596 [2024-11-20 14:48:37.633484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.596 [2024-11-20 14:48:37.633515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.596 [2024-11-20 14:48:37.633524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.596 [2024-11-20 14:48:37.633690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.596 [2024-11-20 14:48:37.633842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.596 [2024-11-20 14:48:37.633850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.596 [2024-11-20 14:48:37.633856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.596 [2024-11-20 14:48:37.633862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.596 [2024-11-20 14:48:37.645529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.596 [2024-11-20 14:48:37.646132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.596 [2024-11-20 14:48:37.646163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.596 [2024-11-20 14:48:37.646175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.596 [2024-11-20 14:48:37.646348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.596 [2024-11-20 14:48:37.646503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.596 [2024-11-20 14:48:37.646510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.596 [2024-11-20 14:48:37.646516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.596 [2024-11-20 14:48:37.646522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.858 [2024-11-20 14:48:37.658189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.858 [2024-11-20 14:48:37.658657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.858 [2024-11-20 14:48:37.658673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.858 [2024-11-20 14:48:37.658679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.858 [2024-11-20 14:48:37.658829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.858 [2024-11-20 14:48:37.658982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.858 [2024-11-20 14:48:37.658989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.858 [2024-11-20 14:48:37.658994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.858 [2024-11-20 14:48:37.659000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.858 10148.00 IOPS, 39.64 MiB/s [2024-11-20T13:48:37.918Z] [2024-11-20 14:48:37.670796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.858 [2024-11-20 14:48:37.671289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.858 [2024-11-20 14:48:37.671310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.858 [2024-11-20 14:48:37.671316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.858 [2024-11-20 14:48:37.671471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.858 [2024-11-20 14:48:37.671623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.858 [2024-11-20 14:48:37.671631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.858 [2024-11-20 14:48:37.671636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.858 [2024-11-20 14:48:37.671641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.858 [2024-11-20 14:48:37.683480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.858 [2024-11-20 14:48:37.683982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.858 [2024-11-20 14:48:37.683996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.858 [2024-11-20 14:48:37.684001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.858 [2024-11-20 14:48:37.684155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.858 [2024-11-20 14:48:37.684310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.858 [2024-11-20 14:48:37.684317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.858 [2024-11-20 14:48:37.684323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.858 [2024-11-20 14:48:37.684328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.859 [2024-11-20 14:48:37.696149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.859 [2024-11-20 14:48:37.696637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.859 [2024-11-20 14:48:37.696652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.859 [2024-11-20 14:48:37.696658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.859 [2024-11-20 14:48:37.696808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.859 [2024-11-20 14:48:37.696958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.859 [2024-11-20 14:48:37.696965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.859 [2024-11-20 14:48:37.696970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.859 [2024-11-20 14:48:37.696975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.859 [2024-11-20 14:48:37.708777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.859 [2024-11-20 14:48:37.709257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.859 [2024-11-20 14:48:37.709272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.859 [2024-11-20 14:48:37.709277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.859 [2024-11-20 14:48:37.709427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.859 [2024-11-20 14:48:37.709577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.859 [2024-11-20 14:48:37.709583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.859 [2024-11-20 14:48:37.709589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.859 [2024-11-20 14:48:37.709594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.859 [2024-11-20 14:48:37.721399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.859 [2024-11-20 14:48:37.722261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.859 [2024-11-20 14:48:37.722280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.859 [2024-11-20 14:48:37.722287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.859 [2024-11-20 14:48:37.722442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.859 [2024-11-20 14:48:37.722595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.859 [2024-11-20 14:48:37.722603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.859 [2024-11-20 14:48:37.722613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.859 [2024-11-20 14:48:37.722618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.859 [2024-11-20 14:48:37.733987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.859 [2024-11-20 14:48:37.734441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.859 [2024-11-20 14:48:37.734456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.859 [2024-11-20 14:48:37.734462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.859 [2024-11-20 14:48:37.734612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.859 [2024-11-20 14:48:37.734763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.859 [2024-11-20 14:48:37.734769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.859 [2024-11-20 14:48:37.734774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.859 [2024-11-20 14:48:37.734779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.859 [2024-11-20 14:48:37.746603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.859 [2024-11-20 14:48:37.747073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.859 [2024-11-20 14:48:37.747087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.859 [2024-11-20 14:48:37.747093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.859 [2024-11-20 14:48:37.747242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.859 [2024-11-20 14:48:37.747400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.859 [2024-11-20 14:48:37.747406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.859 [2024-11-20 14:48:37.747411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.859 [2024-11-20 14:48:37.747416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.859 [2024-11-20 14:48:37.759219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.859 [2024-11-20 14:48:37.759719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.859 [2024-11-20 14:48:37.759734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.859 [2024-11-20 14:48:37.759740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.859 [2024-11-20 14:48:37.759889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.859 [2024-11-20 14:48:37.760039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.859 [2024-11-20 14:48:37.760046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.859 [2024-11-20 14:48:37.760051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.859 [2024-11-20 14:48:37.760056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.859 [2024-11-20 14:48:37.771850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.859 [2024-11-20 14:48:37.772334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.859 [2024-11-20 14:48:37.772349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.859 [2024-11-20 14:48:37.772354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.859 [2024-11-20 14:48:37.772504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.859 [2024-11-20 14:48:37.772653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.859 [2024-11-20 14:48:37.772659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.859 [2024-11-20 14:48:37.772665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.859 [2024-11-20 14:48:37.772670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.860 [2024-11-20 14:48:37.784480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.860 [2024-11-20 14:48:37.784940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.860 [2024-11-20 14:48:37.784954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.860 [2024-11-20 14:48:37.784959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.860 [2024-11-20 14:48:37.785109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.860 [2024-11-20 14:48:37.785266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.860 [2024-11-20 14:48:37.785273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.860 [2024-11-20 14:48:37.785279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.860 [2024-11-20 14:48:37.785284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.860 [2024-11-20 14:48:37.797073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.860 [2024-11-20 14:48:37.797565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.860 [2024-11-20 14:48:37.797596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.860 [2024-11-20 14:48:37.797605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.860 [2024-11-20 14:48:37.797770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.860 [2024-11-20 14:48:37.797923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.860 [2024-11-20 14:48:37.797931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.860 [2024-11-20 14:48:37.797936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.860 [2024-11-20 14:48:37.797942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.860 [2024-11-20 14:48:37.809756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.860 [2024-11-20 14:48:37.810373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.860 [2024-11-20 14:48:37.810407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.860 [2024-11-20 14:48:37.810416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.860 [2024-11-20 14:48:37.810583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.860 [2024-11-20 14:48:37.810736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.860 [2024-11-20 14:48:37.810744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.860 [2024-11-20 14:48:37.810749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.860 [2024-11-20 14:48:37.810756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.860 [2024-11-20 14:48:37.822393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.860 [2024-11-20 14:48:37.822847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.860 [2024-11-20 14:48:37.822862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.860 [2024-11-20 14:48:37.822868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.860 [2024-11-20 14:48:37.823018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.860 [2024-11-20 14:48:37.823168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.860 [2024-11-20 14:48:37.823175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.860 [2024-11-20 14:48:37.823180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.860 [2024-11-20 14:48:37.823185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.860 [2024-11-20 14:48:37.834980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.860 [2024-11-20 14:48:37.835454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.860 [2024-11-20 14:48:37.835469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.860 [2024-11-20 14:48:37.835474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.860 [2024-11-20 14:48:37.835624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.860 [2024-11-20 14:48:37.835774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.860 [2024-11-20 14:48:37.835781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.860 [2024-11-20 14:48:37.835786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.860 [2024-11-20 14:48:37.835791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.860 [2024-11-20 14:48:37.847590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.860 [2024-11-20 14:48:37.848078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.860 [2024-11-20 14:48:37.848092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.860 [2024-11-20 14:48:37.848098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.860 [2024-11-20 14:48:37.848256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.860 [2024-11-20 14:48:37.848406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.860 [2024-11-20 14:48:37.848413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.860 [2024-11-20 14:48:37.848418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.860 [2024-11-20 14:48:37.848422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.860 [2024-11-20 14:48:37.860206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:30.860 [2024-11-20 14:48:37.860697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.860 [2024-11-20 14:48:37.860711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:30.860 [2024-11-20 14:48:37.860717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:30.860 [2024-11-20 14:48:37.860866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:30.860 [2024-11-20 14:48:37.861015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:30.860 [2024-11-20 14:48:37.861022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:30.860 [2024-11-20 14:48:37.861027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:30.861 [2024-11-20 14:48:37.861032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:30.861 [2024-11-20 14:48:37.872817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:30.861 [2024-11-20 14:48:37.873467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.861 [2024-11-20 14:48:37.873498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:30.861 [2024-11-20 14:48:37.873506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:30.861 [2024-11-20 14:48:37.873672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:30.861 [2024-11-20 14:48:37.873825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:30.861 [2024-11-20 14:48:37.873832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:30.861 [2024-11-20 14:48:37.873838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:30.861 [2024-11-20 14:48:37.873845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:30.861 [2024-11-20 14:48:37.885499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:30.861 [2024-11-20 14:48:37.886092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.861 [2024-11-20 14:48:37.886123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:30.861 [2024-11-20 14:48:37.886132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:30.861 [2024-11-20 14:48:37.886304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:30.861 [2024-11-20 14:48:37.886457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:30.861 [2024-11-20 14:48:37.886464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:30.861 [2024-11-20 14:48:37.886474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:30.861 [2024-11-20 14:48:37.886480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:30.861 [2024-11-20 14:48:37.898138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:30.861 [2024-11-20 14:48:37.898639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.861 [2024-11-20 14:48:37.898655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:30.861 [2024-11-20 14:48:37.898661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:30.861 [2024-11-20 14:48:37.898811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:30.861 [2024-11-20 14:48:37.898961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:30.861 [2024-11-20 14:48:37.898968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:30.861 [2024-11-20 14:48:37.898973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:30.861 [2024-11-20 14:48:37.898979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:30.861 [2024-11-20 14:48:37.910780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:30.861 [2024-11-20 14:48:37.911271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.861 [2024-11-20 14:48:37.911286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:30.861 [2024-11-20 14:48:37.911292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:30.861 [2024-11-20 14:48:37.911441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:30.861 [2024-11-20 14:48:37.911592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:30.861 [2024-11-20 14:48:37.911599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:30.861 [2024-11-20 14:48:37.911604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:30.861 [2024-11-20 14:48:37.911608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.123 [2024-11-20 14:48:37.923381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.123 [2024-11-20 14:48:37.923828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.123 [2024-11-20 14:48:37.923842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.123 [2024-11-20 14:48:37.923847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.123 [2024-11-20 14:48:37.923996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.123 [2024-11-20 14:48:37.924146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.123 [2024-11-20 14:48:37.924153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.123 [2024-11-20 14:48:37.924158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.123 [2024-11-20 14:48:37.924163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.123 [2024-11-20 14:48:37.936087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.123 [2024-11-20 14:48:37.936545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.123 [2024-11-20 14:48:37.936558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.123 [2024-11-20 14:48:37.936564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.123 [2024-11-20 14:48:37.936713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.123 [2024-11-20 14:48:37.936863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.123 [2024-11-20 14:48:37.936870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.123 [2024-11-20 14:48:37.936876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.123 [2024-11-20 14:48:37.936881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.123 [2024-11-20 14:48:37.948695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.123 [2024-11-20 14:48:37.949286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.123 [2024-11-20 14:48:37.949318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.123 [2024-11-20 14:48:37.949327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.123 [2024-11-20 14:48:37.949492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.123 [2024-11-20 14:48:37.949645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.123 [2024-11-20 14:48:37.949652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.123 [2024-11-20 14:48:37.949658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.124 [2024-11-20 14:48:37.949664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.124 [2024-11-20 14:48:37.961338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.124 [2024-11-20 14:48:37.961903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.124 [2024-11-20 14:48:37.961934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.124 [2024-11-20 14:48:37.961942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.124 [2024-11-20 14:48:37.962108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.124 [2024-11-20 14:48:37.962269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.124 [2024-11-20 14:48:37.962276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.124 [2024-11-20 14:48:37.962282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.124 [2024-11-20 14:48:37.962288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.124 [2024-11-20 14:48:37.973936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.124 [2024-11-20 14:48:37.974402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.124 [2024-11-20 14:48:37.974422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.124 [2024-11-20 14:48:37.974428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.124 [2024-11-20 14:48:37.974578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.124 [2024-11-20 14:48:37.974728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.124 [2024-11-20 14:48:37.974735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.124 [2024-11-20 14:48:37.974740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.124 [2024-11-20 14:48:37.974745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.124 [2024-11-20 14:48:37.986527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.124 [2024-11-20 14:48:37.987092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.124 [2024-11-20 14:48:37.987124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.124 [2024-11-20 14:48:37.987133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.124 [2024-11-20 14:48:37.987304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.124 [2024-11-20 14:48:37.987458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.124 [2024-11-20 14:48:37.987465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.124 [2024-11-20 14:48:37.987470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.124 [2024-11-20 14:48:37.987476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.124 [2024-11-20 14:48:37.999137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.124 [2024-11-20 14:48:37.999637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.124 [2024-11-20 14:48:37.999653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.124 [2024-11-20 14:48:37.999659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.124 [2024-11-20 14:48:37.999809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.124 [2024-11-20 14:48:37.999959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.124 [2024-11-20 14:48:37.999966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.124 [2024-11-20 14:48:37.999971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.124 [2024-11-20 14:48:37.999976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.124 [2024-11-20 14:48:38.011762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.124 [2024-11-20 14:48:38.012262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.124 [2024-11-20 14:48:38.012277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.124 [2024-11-20 14:48:38.012282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.124 [2024-11-20 14:48:38.012436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.124 [2024-11-20 14:48:38.012586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.124 [2024-11-20 14:48:38.012593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.124 [2024-11-20 14:48:38.012599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.124 [2024-11-20 14:48:38.012604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.124 [2024-11-20 14:48:38.024568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.124 [2024-11-20 14:48:38.025017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.124 [2024-11-20 14:48:38.025030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.124 [2024-11-20 14:48:38.025036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.124 [2024-11-20 14:48:38.025186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.124 [2024-11-20 14:48:38.025342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.124 [2024-11-20 14:48:38.025349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.124 [2024-11-20 14:48:38.025355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.124 [2024-11-20 14:48:38.025360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.124 [2024-11-20 14:48:38.037306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.124 [2024-11-20 14:48:38.037762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.124 [2024-11-20 14:48:38.037776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.124 [2024-11-20 14:48:38.037782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.124 [2024-11-20 14:48:38.037931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.124 [2024-11-20 14:48:38.038081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.124 [2024-11-20 14:48:38.038088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.124 [2024-11-20 14:48:38.038093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.124 [2024-11-20 14:48:38.038097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.124 [2024-11-20 14:48:38.049900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.124 [2024-11-20 14:48:38.050358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.124 [2024-11-20 14:48:38.050388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.124 [2024-11-20 14:48:38.050397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.124 [2024-11-20 14:48:38.050565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.124 [2024-11-20 14:48:38.050718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.124 [2024-11-20 14:48:38.050726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.124 [2024-11-20 14:48:38.050735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.124 [2024-11-20 14:48:38.050741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.124 [2024-11-20 14:48:38.062545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.124 [2024-11-20 14:48:38.063000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.124 [2024-11-20 14:48:38.063016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.124 [2024-11-20 14:48:38.063022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.124 [2024-11-20 14:48:38.063172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.124 [2024-11-20 14:48:38.063328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.124 [2024-11-20 14:48:38.063335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.124 [2024-11-20 14:48:38.063341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.124 [2024-11-20 14:48:38.063346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.124 [2024-11-20 14:48:38.075135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.124 [2024-11-20 14:48:38.075728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.124 [2024-11-20 14:48:38.075759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.124 [2024-11-20 14:48:38.075768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.124 [2024-11-20 14:48:38.075933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.125 [2024-11-20 14:48:38.076086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.125 [2024-11-20 14:48:38.076093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.125 [2024-11-20 14:48:38.076099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.125 [2024-11-20 14:48:38.076105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.125 [2024-11-20 14:48:38.087759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.125 [2024-11-20 14:48:38.088266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.125 [2024-11-20 14:48:38.088283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.125 [2024-11-20 14:48:38.088289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.125 [2024-11-20 14:48:38.088440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.125 [2024-11-20 14:48:38.088591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.125 [2024-11-20 14:48:38.088597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.125 [2024-11-20 14:48:38.088602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.125 [2024-11-20 14:48:38.088607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.125 [2024-11-20 14:48:38.100401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.125 [2024-11-20 14:48:38.100857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.125 [2024-11-20 14:48:38.100871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.125 [2024-11-20 14:48:38.100876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.125 [2024-11-20 14:48:38.101025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.125 [2024-11-20 14:48:38.101175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.125 [2024-11-20 14:48:38.101183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.125 [2024-11-20 14:48:38.101188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.125 [2024-11-20 14:48:38.101193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.125 [2024-11-20 14:48:38.113126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.125 [2024-11-20 14:48:38.113689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.125 [2024-11-20 14:48:38.113720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.125 [2024-11-20 14:48:38.113729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.125 [2024-11-20 14:48:38.113894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.125 [2024-11-20 14:48:38.114047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.125 [2024-11-20 14:48:38.114054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.125 [2024-11-20 14:48:38.114060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.125 [2024-11-20 14:48:38.114066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.125 [2024-11-20 14:48:38.125727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.125 [2024-11-20 14:48:38.126195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.125 [2024-11-20 14:48:38.126211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.125 [2024-11-20 14:48:38.126217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.125 [2024-11-20 14:48:38.126371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.125 [2024-11-20 14:48:38.126522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.125 [2024-11-20 14:48:38.126528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.125 [2024-11-20 14:48:38.126534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.125 [2024-11-20 14:48:38.126539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.125 [2024-11-20 14:48:38.138345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.125 [2024-11-20 14:48:38.138827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.125 [2024-11-20 14:48:38.138845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.125 [2024-11-20 14:48:38.138851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.125 [2024-11-20 14:48:38.139000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.125 [2024-11-20 14:48:38.139150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.125 [2024-11-20 14:48:38.139157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.125 [2024-11-20 14:48:38.139162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.125 [2024-11-20 14:48:38.139167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.125 [2024-11-20 14:48:38.150951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.125 [2024-11-20 14:48:38.151535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.125 [2024-11-20 14:48:38.151566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.125 [2024-11-20 14:48:38.151575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.125 [2024-11-20 14:48:38.151740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.125 [2024-11-20 14:48:38.151893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.125 [2024-11-20 14:48:38.151900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.125 [2024-11-20 14:48:38.151906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.125 [2024-11-20 14:48:38.151912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.125 [2024-11-20 14:48:38.163552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.125 [2024-11-20 14:48:38.164043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.125 [2024-11-20 14:48:38.164058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.125 [2024-11-20 14:48:38.164064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.125 [2024-11-20 14:48:38.164214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.125 [2024-11-20 14:48:38.164369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.125 [2024-11-20 14:48:38.164375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.125 [2024-11-20 14:48:38.164381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.125 [2024-11-20 14:48:38.164386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.125 [2024-11-20 14:48:38.176154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.125 [2024-11-20 14:48:38.176611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.125 [2024-11-20 14:48:38.176625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.125 [2024-11-20 14:48:38.176630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.125 [2024-11-20 14:48:38.176784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.125 [2024-11-20 14:48:38.176934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.125 [2024-11-20 14:48:38.176941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.125 [2024-11-20 14:48:38.176946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.125 [2024-11-20 14:48:38.176951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.388 [2024-11-20 14:48:38.188746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.388 [2024-11-20 14:48:38.189191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.388 [2024-11-20 14:48:38.189205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.388 [2024-11-20 14:48:38.189211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.388 [2024-11-20 14:48:38.189367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.388 [2024-11-20 14:48:38.189517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.388 [2024-11-20 14:48:38.189524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.388 [2024-11-20 14:48:38.189529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.388 [2024-11-20 14:48:38.189534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.388 [2024-11-20 14:48:38.201468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.388 [2024-11-20 14:48:38.201825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.388 [2024-11-20 14:48:38.201839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.388 [2024-11-20 14:48:38.201844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.388 [2024-11-20 14:48:38.201994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.388 [2024-11-20 14:48:38.202144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.388 [2024-11-20 14:48:38.202150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.388 [2024-11-20 14:48:38.202156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.388 [2024-11-20 14:48:38.202161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.388 [2024-11-20 14:48:38.214075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.388 [2024-11-20 14:48:38.214665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.388 [2024-11-20 14:48:38.214705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.388 [2024-11-20 14:48:38.214713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.388 [2024-11-20 14:48:38.214878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.388 [2024-11-20 14:48:38.215032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.388 [2024-11-20 14:48:38.215040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.388 [2024-11-20 14:48:38.215050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.388 [2024-11-20 14:48:38.215056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.388 [2024-11-20 14:48:38.226701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.388 [2024-11-20 14:48:38.227239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.388 [2024-11-20 14:48:38.227276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.388 [2024-11-20 14:48:38.227285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.388 [2024-11-20 14:48:38.227450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.388 [2024-11-20 14:48:38.227604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.388 [2024-11-20 14:48:38.227611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.388 [2024-11-20 14:48:38.227616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.388 [2024-11-20 14:48:38.227622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.388 [2024-11-20 14:48:38.239420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.388 [2024-11-20 14:48:38.240019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.388 [2024-11-20 14:48:38.240050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.388 [2024-11-20 14:48:38.240058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.388 [2024-11-20 14:48:38.240224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.388 [2024-11-20 14:48:38.240385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.388 [2024-11-20 14:48:38.240393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.388 [2024-11-20 14:48:38.240399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.388 [2024-11-20 14:48:38.240405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.388 [2024-11-20 14:48:38.252132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.388 [2024-11-20 14:48:38.252594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.388 [2024-11-20 14:48:38.252610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.388 [2024-11-20 14:48:38.252616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.388 [2024-11-20 14:48:38.252766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.388 [2024-11-20 14:48:38.252916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.388 [2024-11-20 14:48:38.252923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.388 [2024-11-20 14:48:38.252929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.388 [2024-11-20 14:48:38.252934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.388 [2024-11-20 14:48:38.264727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.388 [2024-11-20 14:48:38.265312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.388 [2024-11-20 14:48:38.265343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.388 [2024-11-20 14:48:38.265352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.388 [2024-11-20 14:48:38.265518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.388 [2024-11-20 14:48:38.265672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.388 [2024-11-20 14:48:38.265679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.388 [2024-11-20 14:48:38.265685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.388 [2024-11-20 14:48:38.265691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.388 [2024-11-20 14:48:38.277332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.388 [2024-11-20 14:48:38.277712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.388 [2024-11-20 14:48:38.277728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.388 [2024-11-20 14:48:38.277733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.388 [2024-11-20 14:48:38.277883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.388 [2024-11-20 14:48:38.278033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.388 [2024-11-20 14:48:38.278040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.388 [2024-11-20 14:48:38.278046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.388 [2024-11-20 14:48:38.278051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.388 [2024-11-20 14:48:38.289963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.388 [2024-11-20 14:48:38.290419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.388 [2024-11-20 14:48:38.290433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.388 [2024-11-20 14:48:38.290439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.388 [2024-11-20 14:48:38.290588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.388 [2024-11-20 14:48:38.290738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.388 [2024-11-20 14:48:38.290745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.388 [2024-11-20 14:48:38.290750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.389 [2024-11-20 14:48:38.290755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.389 [2024-11-20 14:48:38.302675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.389 [2024-11-20 14:48:38.303162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.389 [2024-11-20 14:48:38.303182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.389 [2024-11-20 14:48:38.303188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.389 [2024-11-20 14:48:38.303343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.389 [2024-11-20 14:48:38.303494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.389 [2024-11-20 14:48:38.303500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.389 [2024-11-20 14:48:38.303506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.389 [2024-11-20 14:48:38.303510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.389 [2024-11-20 14:48:38.315273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.389 [2024-11-20 14:48:38.315812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.389 [2024-11-20 14:48:38.315844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.389 [2024-11-20 14:48:38.315852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.389 [2024-11-20 14:48:38.316017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.389 [2024-11-20 14:48:38.316170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.389 [2024-11-20 14:48:38.316177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.389 [2024-11-20 14:48:38.316183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.389 [2024-11-20 14:48:38.316189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.389 [2024-11-20 14:48:38.327967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.389 [2024-11-20 14:48:38.328563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.389 [2024-11-20 14:48:38.328597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.389 [2024-11-20 14:48:38.328605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.389 [2024-11-20 14:48:38.328770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.389 [2024-11-20 14:48:38.328923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.389 [2024-11-20 14:48:38.328930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.389 [2024-11-20 14:48:38.328936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.389 [2024-11-20 14:48:38.328942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.389 [2024-11-20 14:48:38.340585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.389 [2024-11-20 14:48:38.341186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.389 [2024-11-20 14:48:38.341218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.389 [2024-11-20 14:48:38.341227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.389 [2024-11-20 14:48:38.341403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.389 [2024-11-20 14:48:38.341557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.389 [2024-11-20 14:48:38.341564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.389 [2024-11-20 14:48:38.341570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.389 [2024-11-20 14:48:38.341576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.389 [2024-11-20 14:48:38.353222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.389 [2024-11-20 14:48:38.353798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.389 [2024-11-20 14:48:38.353829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.389 [2024-11-20 14:48:38.353838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.389 [2024-11-20 14:48:38.354004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.389 [2024-11-20 14:48:38.354165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.389 [2024-11-20 14:48:38.354173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.389 [2024-11-20 14:48:38.354179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.389 [2024-11-20 14:48:38.354185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.389 [2024-11-20 14:48:38.365831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.389 [2024-11-20 14:48:38.366173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.389 [2024-11-20 14:48:38.366192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.389 [2024-11-20 14:48:38.366198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.389 [2024-11-20 14:48:38.366357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.389 [2024-11-20 14:48:38.366509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.389 [2024-11-20 14:48:38.366516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.389 [2024-11-20 14:48:38.366522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.389 [2024-11-20 14:48:38.366527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.389 [2024-11-20 14:48:38.378434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.389 [2024-11-20 14:48:38.378976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.389 [2024-11-20 14:48:38.379008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.389 [2024-11-20 14:48:38.379016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.389 [2024-11-20 14:48:38.379182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.389 [2024-11-20 14:48:38.379343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.389 [2024-11-20 14:48:38.379350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.389 [2024-11-20 14:48:38.379360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.389 [2024-11-20 14:48:38.379366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.389 [2024-11-20 14:48:38.391022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.389 [2024-11-20 14:48:38.391585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.389 [2024-11-20 14:48:38.391617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.389 [2024-11-20 14:48:38.391626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.389 [2024-11-20 14:48:38.391791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.389 [2024-11-20 14:48:38.391944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.389 [2024-11-20 14:48:38.391952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.389 [2024-11-20 14:48:38.391958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.389 [2024-11-20 14:48:38.391964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.389 [2024-11-20 14:48:38.403618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.389 [2024-11-20 14:48:38.404176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.389 [2024-11-20 14:48:38.404208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.389 [2024-11-20 14:48:38.404217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.389 [2024-11-20 14:48:38.404390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.389 [2024-11-20 14:48:38.404544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.389 [2024-11-20 14:48:38.404551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.389 [2024-11-20 14:48:38.404557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.389 [2024-11-20 14:48:38.404563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.389 [2024-11-20 14:48:38.416242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.389 [2024-11-20 14:48:38.416846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.389 [2024-11-20 14:48:38.416877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.389 [2024-11-20 14:48:38.416886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.389 [2024-11-20 14:48:38.417051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.389 [2024-11-20 14:48:38.417204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.390 [2024-11-20 14:48:38.417211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.390 [2024-11-20 14:48:38.417217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.390 [2024-11-20 14:48:38.417222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.390 [2024-11-20 14:48:38.428880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.390 [2024-11-20 14:48:38.429483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.390 [2024-11-20 14:48:38.429515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.390 [2024-11-20 14:48:38.429524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.390 [2024-11-20 14:48:38.429689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.390 [2024-11-20 14:48:38.429842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.390 [2024-11-20 14:48:38.429850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.390 [2024-11-20 14:48:38.429855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.390 [2024-11-20 14:48:38.429861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.390 [2024-11-20 14:48:38.441515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.390 [2024-11-20 14:48:38.441937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.390 [2024-11-20 14:48:38.441969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.390 [2024-11-20 14:48:38.441977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.390 [2024-11-20 14:48:38.442143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.390 [2024-11-20 14:48:38.442305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.390 [2024-11-20 14:48:38.442312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.390 [2024-11-20 14:48:38.442318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.390 [2024-11-20 14:48:38.442324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.651 [2024-11-20 14:48:38.454122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.651 [2024-11-20 14:48:38.454681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-11-20 14:48:38.454713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.651 [2024-11-20 14:48:38.454722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.651 [2024-11-20 14:48:38.454894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.651 [2024-11-20 14:48:38.455049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.651 [2024-11-20 14:48:38.455056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.651 [2024-11-20 14:48:38.455062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.651 [2024-11-20 14:48:38.455068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.651 [2024-11-20 14:48:38.466720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.651 [2024-11-20 14:48:38.467309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-11-20 14:48:38.467344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.651 [2024-11-20 14:48:38.467352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.651 [2024-11-20 14:48:38.467518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.651 [2024-11-20 14:48:38.467671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.651 [2024-11-20 14:48:38.467678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.651 [2024-11-20 14:48:38.467684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.651 [2024-11-20 14:48:38.467690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.651 [2024-11-20 14:48:38.479340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.651 [2024-11-20 14:48:38.479836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-11-20 14:48:38.479852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.652 [2024-11-20 14:48:38.479857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.652 [2024-11-20 14:48:38.480008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.652 [2024-11-20 14:48:38.480158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.652 [2024-11-20 14:48:38.480166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.652 [2024-11-20 14:48:38.480173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.652 [2024-11-20 14:48:38.480179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.652 [2024-11-20 14:48:38.491961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.652 [2024-11-20 14:48:38.492534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-11-20 14:48:38.492565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.652 [2024-11-20 14:48:38.492574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.652 [2024-11-20 14:48:38.492740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.652 [2024-11-20 14:48:38.492893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.652 [2024-11-20 14:48:38.492900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.652 [2024-11-20 14:48:38.492906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.652 [2024-11-20 14:48:38.492912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.652 [2024-11-20 14:48:38.504553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.652 [2024-11-20 14:48:38.505148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-11-20 14:48:38.505179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.652 [2024-11-20 14:48:38.505188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.652 [2024-11-20 14:48:38.505362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.652 [2024-11-20 14:48:38.505519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.652 [2024-11-20 14:48:38.505526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.652 [2024-11-20 14:48:38.505532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.652 [2024-11-20 14:48:38.505538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.652 [2024-11-20 14:48:38.517174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.652 [2024-11-20 14:48:38.517745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-11-20 14:48:38.517776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.652 [2024-11-20 14:48:38.517786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.652 [2024-11-20 14:48:38.517951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.652 [2024-11-20 14:48:38.518104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.652 [2024-11-20 14:48:38.518111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.652 [2024-11-20 14:48:38.518117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.652 [2024-11-20 14:48:38.518123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.652 [2024-11-20 14:48:38.529773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.652 [2024-11-20 14:48:38.530387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-11-20 14:48:38.530418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.652 [2024-11-20 14:48:38.530427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.652 [2024-11-20 14:48:38.530594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.652 [2024-11-20 14:48:38.530747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.652 [2024-11-20 14:48:38.530755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.652 [2024-11-20 14:48:38.530760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.652 [2024-11-20 14:48:38.530767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.652 [2024-11-20 14:48:38.542427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.652 [2024-11-20 14:48:38.543014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-11-20 14:48:38.543045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.652 [2024-11-20 14:48:38.543054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.652 [2024-11-20 14:48:38.543219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.652 [2024-11-20 14:48:38.543379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.652 [2024-11-20 14:48:38.543387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.652 [2024-11-20 14:48:38.543396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.652 [2024-11-20 14:48:38.543402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.652 [2024-11-20 14:48:38.555059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.652 [2024-11-20 14:48:38.555617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-11-20 14:48:38.555648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.652 [2024-11-20 14:48:38.555657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.652 [2024-11-20 14:48:38.555822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.652 [2024-11-20 14:48:38.555975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.652 [2024-11-20 14:48:38.555982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.652 [2024-11-20 14:48:38.555988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.652 [2024-11-20 14:48:38.555994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.652 [2024-11-20 14:48:38.567782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.652 [2024-11-20 14:48:38.568322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-11-20 14:48:38.568353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.652 [2024-11-20 14:48:38.568362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.652 [2024-11-20 14:48:38.568529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.652 [2024-11-20 14:48:38.568682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.652 [2024-11-20 14:48:38.568689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.652 [2024-11-20 14:48:38.568694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.652 [2024-11-20 14:48:38.568701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.652 [2024-11-20 14:48:38.580498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.652 [2024-11-20 14:48:38.581093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-11-20 14:48:38.581125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.652 [2024-11-20 14:48:38.581134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.652 [2024-11-20 14:48:38.581306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.652 [2024-11-20 14:48:38.581460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.652 [2024-11-20 14:48:38.581467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.652 [2024-11-20 14:48:38.581473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.652 [2024-11-20 14:48:38.581479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.652 [2024-11-20 14:48:38.593118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.652 [2024-11-20 14:48:38.593674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-11-20 14:48:38.593706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.652 [2024-11-20 14:48:38.593714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.652 [2024-11-20 14:48:38.593880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.652 [2024-11-20 14:48:38.594033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.652 [2024-11-20 14:48:38.594040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.652 [2024-11-20 14:48:38.594046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.652 [2024-11-20 14:48:38.594051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.652 [2024-11-20 14:48:38.605707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.653 [2024-11-20 14:48:38.606164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-11-20 14:48:38.606180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.653 [2024-11-20 14:48:38.606186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.653 [2024-11-20 14:48:38.606341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.653 [2024-11-20 14:48:38.606492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.653 [2024-11-20 14:48:38.606499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.653 [2024-11-20 14:48:38.606504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.653 [2024-11-20 14:48:38.606509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.653 [2024-11-20 14:48:38.618315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.653 [2024-11-20 14:48:38.618822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-11-20 14:48:38.618836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.653 [2024-11-20 14:48:38.618842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.653 [2024-11-20 14:48:38.618992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.653 [2024-11-20 14:48:38.619144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.653 [2024-11-20 14:48:38.619151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.653 [2024-11-20 14:48:38.619156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.653 [2024-11-20 14:48:38.619161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.653 [2024-11-20 14:48:38.630930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.653 [2024-11-20 14:48:38.631383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-11-20 14:48:38.631397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.653 [2024-11-20 14:48:38.631406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.653 [2024-11-20 14:48:38.631556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.653 [2024-11-20 14:48:38.631706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.653 [2024-11-20 14:48:38.631712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.653 [2024-11-20 14:48:38.631718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.653 [2024-11-20 14:48:38.631723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.653 [2024-11-20 14:48:38.643516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.653 [2024-11-20 14:48:38.643972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-11-20 14:48:38.643985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.653 [2024-11-20 14:48:38.643991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.653 [2024-11-20 14:48:38.644140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.653 [2024-11-20 14:48:38.644294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.653 [2024-11-20 14:48:38.644301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.653 [2024-11-20 14:48:38.644306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.653 [2024-11-20 14:48:38.644311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.653 [2024-11-20 14:48:38.656236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.653 [2024-11-20 14:48:38.656825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-11-20 14:48:38.656856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.653 [2024-11-20 14:48:38.656864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.653 [2024-11-20 14:48:38.657030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.653 [2024-11-20 14:48:38.657182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.653 [2024-11-20 14:48:38.657189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.653 [2024-11-20 14:48:38.657195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.653 [2024-11-20 14:48:38.657202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.653 7611.00 IOPS, 29.73 MiB/s [2024-11-20T13:48:38.713Z] [2024-11-20 14:48:38.668842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.653 [2024-11-20 14:48:38.669372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-11-20 14:48:38.669403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.653 [2024-11-20 14:48:38.669413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.653 [2024-11-20 14:48:38.669583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.653 [2024-11-20 14:48:38.669737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.653 [2024-11-20 14:48:38.669744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.653 [2024-11-20 14:48:38.669751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.653 [2024-11-20 14:48:38.669757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.653 [2024-11-20 14:48:38.681545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.653 [2024-11-20 14:48:38.682144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-11-20 14:48:38.682175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.653 [2024-11-20 14:48:38.682184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.653 [2024-11-20 14:48:38.682356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.653 [2024-11-20 14:48:38.682509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.653 [2024-11-20 14:48:38.682516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.653 [2024-11-20 14:48:38.682522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.653 [2024-11-20 14:48:38.682528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.653 [2024-11-20 14:48:38.694315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.653 [2024-11-20 14:48:38.694757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-11-20 14:48:38.694788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.653 [2024-11-20 14:48:38.694797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.653 [2024-11-20 14:48:38.694963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.653 [2024-11-20 14:48:38.695116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.653 [2024-11-20 14:48:38.695123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.653 [2024-11-20 14:48:38.695128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.653 [2024-11-20 14:48:38.695134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.653 [2024-11-20 14:48:38.706927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.653 [2024-11-20 14:48:38.707530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-11-20 14:48:38.707561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.653 [2024-11-20 14:48:38.707570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.653 [2024-11-20 14:48:38.707735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.653 [2024-11-20 14:48:38.707889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.653 [2024-11-20 14:48:38.707896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.653 [2024-11-20 14:48:38.707905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.653 [2024-11-20 14:48:38.707911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.914 [2024-11-20 14:48:38.719546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.914 [2024-11-20 14:48:38.720147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.914 [2024-11-20 14:48:38.720178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.914 [2024-11-20 14:48:38.720187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.914 [2024-11-20 14:48:38.720359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.915 [2024-11-20 14:48:38.720513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.915 [2024-11-20 14:48:38.720520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.915 [2024-11-20 14:48:38.720526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.915 [2024-11-20 14:48:38.720532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.915 [2024-11-20 14:48:38.732163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.915 [2024-11-20 14:48:38.732739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.915 [2024-11-20 14:48:38.732771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.915 [2024-11-20 14:48:38.732780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.915 [2024-11-20 14:48:38.732945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.915 [2024-11-20 14:48:38.733099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.915 [2024-11-20 14:48:38.733106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.915 [2024-11-20 14:48:38.733111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.915 [2024-11-20 14:48:38.733118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.915 [2024-11-20 14:48:38.744779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.915 [2024-11-20 14:48:38.745381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.915 [2024-11-20 14:48:38.745413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.915 [2024-11-20 14:48:38.745422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.915 [2024-11-20 14:48:38.745587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.915 [2024-11-20 14:48:38.745740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.915 [2024-11-20 14:48:38.745747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.915 [2024-11-20 14:48:38.745752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.915 [2024-11-20 14:48:38.745759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.915 [2024-11-20 14:48:38.757418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.915 [2024-11-20 14:48:38.757878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.915 [2024-11-20 14:48:38.757910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.915 [2024-11-20 14:48:38.757918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.915 [2024-11-20 14:48:38.758085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.915 [2024-11-20 14:48:38.758238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.915 [2024-11-20 14:48:38.758251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.915 [2024-11-20 14:48:38.758257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.915 [2024-11-20 14:48:38.758263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.915 [2024-11-20 14:48:38.770045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:31.915 [2024-11-20 14:48:38.770580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.915 [2024-11-20 14:48:38.770611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:31.915 [2024-11-20 14:48:38.770620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:31.915 [2024-11-20 14:48:38.770785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:31.915 [2024-11-20 14:48:38.770939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:31.915 [2024-11-20 14:48:38.770946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:31.915 [2024-11-20 14:48:38.770952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:31.915 [2024-11-20 14:48:38.770959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:31.915 [2024-11-20 14:48:38.782731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.915 [2024-11-20 14:48:38.783331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.915 [2024-11-20 14:48:38.783362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.915 [2024-11-20 14:48:38.783371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.915 [2024-11-20 14:48:38.783538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.915 [2024-11-20 14:48:38.783691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.915 [2024-11-20 14:48:38.783698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.915 [2024-11-20 14:48:38.783704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.915 [2024-11-20 14:48:38.783711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.915 [2024-11-20 14:48:38.795355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.915 [2024-11-20 14:48:38.795953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.915 [2024-11-20 14:48:38.795988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.915 [2024-11-20 14:48:38.795996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.915 [2024-11-20 14:48:38.796161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.915 [2024-11-20 14:48:38.796322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.915 [2024-11-20 14:48:38.796330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.915 [2024-11-20 14:48:38.796336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.915 [2024-11-20 14:48:38.796342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.915 [2024-11-20 14:48:38.807986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.915 [2024-11-20 14:48:38.808571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.915 [2024-11-20 14:48:38.808602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.915 [2024-11-20 14:48:38.808611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.915 [2024-11-20 14:48:38.808776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.915 [2024-11-20 14:48:38.808929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.915 [2024-11-20 14:48:38.808936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.915 [2024-11-20 14:48:38.808942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.915 [2024-11-20 14:48:38.808949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.915 [2024-11-20 14:48:38.820579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.915 [2024-11-20 14:48:38.821154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.915 [2024-11-20 14:48:38.821185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.915 [2024-11-20 14:48:38.821194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.915 [2024-11-20 14:48:38.821368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.915 [2024-11-20 14:48:38.821522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.915 [2024-11-20 14:48:38.821529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.915 [2024-11-20 14:48:38.821535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.915 [2024-11-20 14:48:38.821541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.915 [2024-11-20 14:48:38.833195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.915 [2024-11-20 14:48:38.833751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.915 [2024-11-20 14:48:38.833782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.915 [2024-11-20 14:48:38.833791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.915 [2024-11-20 14:48:38.833960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.915 [2024-11-20 14:48:38.834113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.915 [2024-11-20 14:48:38.834120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.915 [2024-11-20 14:48:38.834126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.915 [2024-11-20 14:48:38.834132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.915 [2024-11-20 14:48:38.845799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.915 [2024-11-20 14:48:38.846289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.915 [2024-11-20 14:48:38.846321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.916 [2024-11-20 14:48:38.846330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.916 [2024-11-20 14:48:38.846496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.916 [2024-11-20 14:48:38.846649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.916 [2024-11-20 14:48:38.846656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.916 [2024-11-20 14:48:38.846662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.916 [2024-11-20 14:48:38.846668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.916 [2024-11-20 14:48:38.858461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.916 [2024-11-20 14:48:38.859060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.916 [2024-11-20 14:48:38.859091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.916 [2024-11-20 14:48:38.859100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.916 [2024-11-20 14:48:38.859273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.916 [2024-11-20 14:48:38.859426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.916 [2024-11-20 14:48:38.859434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.916 [2024-11-20 14:48:38.859439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.916 [2024-11-20 14:48:38.859445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.916 [2024-11-20 14:48:38.871084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.916 [2024-11-20 14:48:38.871625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.916 [2024-11-20 14:48:38.871656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.916 [2024-11-20 14:48:38.871665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.916 [2024-11-20 14:48:38.871830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.916 [2024-11-20 14:48:38.871983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.916 [2024-11-20 14:48:38.871990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.916 [2024-11-20 14:48:38.871999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.916 [2024-11-20 14:48:38.872005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.916 [2024-11-20 14:48:38.883785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.916 [2024-11-20 14:48:38.884275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.916 [2024-11-20 14:48:38.884291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.916 [2024-11-20 14:48:38.884297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.916 [2024-11-20 14:48:38.884447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.916 [2024-11-20 14:48:38.884597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.916 [2024-11-20 14:48:38.884604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.916 [2024-11-20 14:48:38.884610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.916 [2024-11-20 14:48:38.884615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.916 [2024-11-20 14:48:38.896383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.916 [2024-11-20 14:48:38.896924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.916 [2024-11-20 14:48:38.896955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.916 [2024-11-20 14:48:38.896964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.916 [2024-11-20 14:48:38.897129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.916 [2024-11-20 14:48:38.897290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.916 [2024-11-20 14:48:38.897298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.916 [2024-11-20 14:48:38.897304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.916 [2024-11-20 14:48:38.897310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.916 [2024-11-20 14:48:38.909090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.916 [2024-11-20 14:48:38.909675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.916 [2024-11-20 14:48:38.909706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.916 [2024-11-20 14:48:38.909715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.916 [2024-11-20 14:48:38.909881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.916 [2024-11-20 14:48:38.910034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.916 [2024-11-20 14:48:38.910041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.916 [2024-11-20 14:48:38.910047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.916 [2024-11-20 14:48:38.910052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.916 [2024-11-20 14:48:38.921700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.916 [2024-11-20 14:48:38.922316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.916 [2024-11-20 14:48:38.922348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.916 [2024-11-20 14:48:38.922357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.916 [2024-11-20 14:48:38.922525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.916 [2024-11-20 14:48:38.922678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.916 [2024-11-20 14:48:38.922685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.916 [2024-11-20 14:48:38.922691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.916 [2024-11-20 14:48:38.922697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.916 [2024-11-20 14:48:38.934348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.916 [2024-11-20 14:48:38.934944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.916 [2024-11-20 14:48:38.934976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.916 [2024-11-20 14:48:38.934985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.916 [2024-11-20 14:48:38.935150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.916 [2024-11-20 14:48:38.935310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.916 [2024-11-20 14:48:38.935318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.916 [2024-11-20 14:48:38.935324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.916 [2024-11-20 14:48:38.935329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.916 [2024-11-20 14:48:38.946977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.916 [2024-11-20 14:48:38.947596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.916 [2024-11-20 14:48:38.947627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.916 [2024-11-20 14:48:38.947637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.916 [2024-11-20 14:48:38.947802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.916 [2024-11-20 14:48:38.947955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.916 [2024-11-20 14:48:38.947962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.916 [2024-11-20 14:48:38.947968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.916 [2024-11-20 14:48:38.947973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.916 [2024-11-20 14:48:38.959623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.916 [2024-11-20 14:48:38.960207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.916 [2024-11-20 14:48:38.960241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.916 [2024-11-20 14:48:38.960256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.916 [2024-11-20 14:48:38.960421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.916 [2024-11-20 14:48:38.960574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.916 [2024-11-20 14:48:38.960581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.916 [2024-11-20 14:48:38.960588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.916 [2024-11-20 14:48:38.960594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:31.916 [2024-11-20 14:48:38.972225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:31.916 [2024-11-20 14:48:38.972774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.917 [2024-11-20 14:48:38.972805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:31.917 [2024-11-20 14:48:38.972814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:31.917 [2024-11-20 14:48:38.972979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:31.917 [2024-11-20 14:48:38.973132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:31.917 [2024-11-20 14:48:38.973139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:31.917 [2024-11-20 14:48:38.973145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:31.917 [2024-11-20 14:48:38.973151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:32.178 [2024-11-20 14:48:38.984937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:32.178 [2024-11-20 14:48:38.985562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.178 [2024-11-20 14:48:38.985593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:32.178 [2024-11-20 14:48:38.985603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:32.178 [2024-11-20 14:48:38.985769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:32.178 [2024-11-20 14:48:38.985922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:32.178 [2024-11-20 14:48:38.985930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:32.178 [2024-11-20 14:48:38.985936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:32.178 [2024-11-20 14:48:38.985942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:32.178 [2024-11-20 14:48:38.997585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:32.178 [2024-11-20 14:48:38.998192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.178 [2024-11-20 14:48:38.998224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:32.178 [2024-11-20 14:48:38.998233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:32.178 [2024-11-20 14:48:38.998408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:32.178 [2024-11-20 14:48:38.998562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:32.178 [2024-11-20 14:48:38.998569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:32.178 [2024-11-20 14:48:38.998575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:32.178 [2024-11-20 14:48:38.998581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:32.178 [2024-11-20 14:48:39.010221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:32.179 [2024-11-20 14:48:39.010817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-11-20 14:48:39.010849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:32.179 [2024-11-20 14:48:39.010858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:32.179 [2024-11-20 14:48:39.011023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:32.179 [2024-11-20 14:48:39.011176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:32.179 [2024-11-20 14:48:39.011183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:32.179 [2024-11-20 14:48:39.011189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:32.179 [2024-11-20 14:48:39.011194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:32.179 [2024-11-20 14:48:39.022845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:32.179 [2024-11-20 14:48:39.023284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-11-20 14:48:39.023314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:32.179 [2024-11-20 14:48:39.023323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:32.179 [2024-11-20 14:48:39.023488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:32.179 [2024-11-20 14:48:39.023641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:32.179 [2024-11-20 14:48:39.023648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:32.179 [2024-11-20 14:48:39.023653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:32.179 [2024-11-20 14:48:39.023660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:32.179 [2024-11-20 14:48:39.035475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:32.179 [2024-11-20 14:48:39.035977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-11-20 14:48:39.035993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:32.179 [2024-11-20 14:48:39.036000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:32.179 [2024-11-20 14:48:39.036150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:32.179 [2024-11-20 14:48:39.036306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:32.179 [2024-11-20 14:48:39.036313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:32.179 [2024-11-20 14:48:39.036321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:32.179 [2024-11-20 14:48:39.036327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:32.179 [2024-11-20 14:48:39.048109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:32.179 [2024-11-20 14:48:39.048582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-11-20 14:48:39.048597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:32.179 [2024-11-20 14:48:39.048602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:32.179 [2024-11-20 14:48:39.048752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:32.179 [2024-11-20 14:48:39.048902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:32.179 [2024-11-20 14:48:39.048909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:32.179 [2024-11-20 14:48:39.048914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:32.179 [2024-11-20 14:48:39.048919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:32.179 [2024-11-20 14:48:39.060822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:32.179 [2024-11-20 14:48:39.061350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-11-20 14:48:39.061381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:32.179 [2024-11-20 14:48:39.061390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:32.179 [2024-11-20 14:48:39.061558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:32.179 [2024-11-20 14:48:39.061711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:32.179 [2024-11-20 14:48:39.061718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:32.179 [2024-11-20 14:48:39.061724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:32.179 [2024-11-20 14:48:39.061731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:32.179 [2024-11-20 14:48:39.073524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:32.179 [2024-11-20 14:48:39.074123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.180 [2024-11-20 14:48:39.074154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420
00:28:32.180 [2024-11-20 14:48:39.074163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set
00:28:32.180 [2024-11-20 14:48:39.074336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor
00:28:32.180 [2024-11-20 14:48:39.074489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:32.180 [2024-11-20 14:48:39.074496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:32.180 [2024-11-20 14:48:39.074502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:32.180 [2024-11-20 14:48:39.074508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:32.180 [2024-11-20 14:48:39.086165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.180 [2024-11-20 14:48:39.086761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.180 [2024-11-20 14:48:39.086792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.180 [2024-11-20 14:48:39.086801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.180 [2024-11-20 14:48:39.086966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.180 [2024-11-20 14:48:39.087119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.180 [2024-11-20 14:48:39.087126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.180 [2024-11-20 14:48:39.087132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.180 [2024-11-20 14:48:39.087138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.180 [2024-11-20 14:48:39.098778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.180 [2024-11-20 14:48:39.099294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.180 [2024-11-20 14:48:39.099316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.180 [2024-11-20 14:48:39.099322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.180 [2024-11-20 14:48:39.099478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.180 [2024-11-20 14:48:39.099629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.180 [2024-11-20 14:48:39.099635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.180 [2024-11-20 14:48:39.099641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.180 [2024-11-20 14:48:39.099646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.180 [2024-11-20 14:48:39.111421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.180 [2024-11-20 14:48:39.111902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.180 [2024-11-20 14:48:39.111917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.180 [2024-11-20 14:48:39.111922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.180 [2024-11-20 14:48:39.112072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.180 [2024-11-20 14:48:39.112222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.180 [2024-11-20 14:48:39.112229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.180 [2024-11-20 14:48:39.112234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.180 [2024-11-20 14:48:39.112239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.180 [2024-11-20 14:48:39.124038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.180 [2024-11-20 14:48:39.124591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.180 [2024-11-20 14:48:39.124626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.180 [2024-11-20 14:48:39.124636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.180 [2024-11-20 14:48:39.124801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.180 [2024-11-20 14:48:39.124954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.180 [2024-11-20 14:48:39.124962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.180 [2024-11-20 14:48:39.124968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.180 [2024-11-20 14:48:39.124974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.180 [2024-11-20 14:48:39.136622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.180 [2024-11-20 14:48:39.137193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.180 [2024-11-20 14:48:39.137224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.180 [2024-11-20 14:48:39.137233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.180 [2024-11-20 14:48:39.137410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.180 [2024-11-20 14:48:39.137564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.180 [2024-11-20 14:48:39.137572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.180 [2024-11-20 14:48:39.137578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.180 [2024-11-20 14:48:39.137584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.180 [2024-11-20 14:48:39.149234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.180 [2024-11-20 14:48:39.149861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.180 [2024-11-20 14:48:39.149893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.180 [2024-11-20 14:48:39.149902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.180 [2024-11-20 14:48:39.150067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.180 [2024-11-20 14:48:39.150220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.180 [2024-11-20 14:48:39.150227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.181 [2024-11-20 14:48:39.150232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.181 [2024-11-20 14:48:39.150238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.181 [2024-11-20 14:48:39.161897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.181 [2024-11-20 14:48:39.162571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.181 [2024-11-20 14:48:39.162603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.181 [2024-11-20 14:48:39.162611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.181 [2024-11-20 14:48:39.162781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.181 [2024-11-20 14:48:39.162934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.181 [2024-11-20 14:48:39.162941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.181 [2024-11-20 14:48:39.162947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.181 [2024-11-20 14:48:39.162952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.181 [2024-11-20 14:48:39.174610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.181 [2024-11-20 14:48:39.175161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.181 [2024-11-20 14:48:39.175192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.181 [2024-11-20 14:48:39.175202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.181 [2024-11-20 14:48:39.175376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.181 [2024-11-20 14:48:39.175530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.181 [2024-11-20 14:48:39.175537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.181 [2024-11-20 14:48:39.175543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.181 [2024-11-20 14:48:39.175550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.181 [2024-11-20 14:48:39.187206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.181 [2024-11-20 14:48:39.187759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.181 [2024-11-20 14:48:39.187791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.181 [2024-11-20 14:48:39.187800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.181 [2024-11-20 14:48:39.187965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.181 [2024-11-20 14:48:39.188117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.181 [2024-11-20 14:48:39.188125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.181 [2024-11-20 14:48:39.188131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.181 [2024-11-20 14:48:39.188138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.181 [2024-11-20 14:48:39.199924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.181 [2024-11-20 14:48:39.200384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.181 [2024-11-20 14:48:39.200401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.181 [2024-11-20 14:48:39.200407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.181 [2024-11-20 14:48:39.200557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.181 [2024-11-20 14:48:39.200707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.181 [2024-11-20 14:48:39.200714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.181 [2024-11-20 14:48:39.200723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.181 [2024-11-20 14:48:39.200728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.181 [2024-11-20 14:48:39.212513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.181 [2024-11-20 14:48:39.213103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.181 [2024-11-20 14:48:39.213134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.181 [2024-11-20 14:48:39.213143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.181 [2024-11-20 14:48:39.213317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.181 [2024-11-20 14:48:39.213470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.181 [2024-11-20 14:48:39.213477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.181 [2024-11-20 14:48:39.213483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.181 [2024-11-20 14:48:39.213489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.181 [2024-11-20 14:48:39.225135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.181 [2024-11-20 14:48:39.226221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.181 [2024-11-20 14:48:39.226255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.181 [2024-11-20 14:48:39.226264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.181 [2024-11-20 14:48:39.226430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.181 [2024-11-20 14:48:39.226583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.181 [2024-11-20 14:48:39.226590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.181 [2024-11-20 14:48:39.226596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.181 [2024-11-20 14:48:39.226602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.441 [2024-11-20 14:48:39.237857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.442 [2024-11-20 14:48:39.238331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.442 [2024-11-20 14:48:39.238347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.442 [2024-11-20 14:48:39.238353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.442 [2024-11-20 14:48:39.238505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.442 [2024-11-20 14:48:39.238655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.442 [2024-11-20 14:48:39.238662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.442 [2024-11-20 14:48:39.238669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.442 [2024-11-20 14:48:39.238674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.442 [2024-11-20 14:48:39.250493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.442 [2024-11-20 14:48:39.251050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.442 [2024-11-20 14:48:39.251081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.442 [2024-11-20 14:48:39.251090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.442 [2024-11-20 14:48:39.251262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.442 [2024-11-20 14:48:39.251416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.442 [2024-11-20 14:48:39.251423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.442 [2024-11-20 14:48:39.251429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.442 [2024-11-20 14:48:39.251435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.442 [2024-11-20 14:48:39.263083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.442 [2024-11-20 14:48:39.263648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.442 [2024-11-20 14:48:39.263680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.442 [2024-11-20 14:48:39.263689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.442 [2024-11-20 14:48:39.263854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.442 [2024-11-20 14:48:39.264008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.442 [2024-11-20 14:48:39.264015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.442 [2024-11-20 14:48:39.264020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.442 [2024-11-20 14:48:39.264027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.442 [2024-11-20 14:48:39.275674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.442 [2024-11-20 14:48:39.276213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.442 [2024-11-20 14:48:39.276249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.442 [2024-11-20 14:48:39.276258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.442 [2024-11-20 14:48:39.276426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.442 [2024-11-20 14:48:39.276579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.442 [2024-11-20 14:48:39.276586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.442 [2024-11-20 14:48:39.276593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.442 [2024-11-20 14:48:39.276600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.442 [2024-11-20 14:48:39.288312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.442 [2024-11-20 14:48:39.288807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.442 [2024-11-20 14:48:39.288826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.442 [2024-11-20 14:48:39.288832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.442 [2024-11-20 14:48:39.288982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.442 [2024-11-20 14:48:39.289132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.442 [2024-11-20 14:48:39.289139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.442 [2024-11-20 14:48:39.289144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.442 [2024-11-20 14:48:39.289149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.442 [2024-11-20 14:48:39.300922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.442 [2024-11-20 14:48:39.301279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.442 [2024-11-20 14:48:39.301293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.442 [2024-11-20 14:48:39.301299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.442 [2024-11-20 14:48:39.301449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.442 [2024-11-20 14:48:39.301599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.442 [2024-11-20 14:48:39.301606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.442 [2024-11-20 14:48:39.301611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.442 [2024-11-20 14:48:39.301616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.442 [2024-11-20 14:48:39.313536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.442 [2024-11-20 14:48:39.314100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.442 [2024-11-20 14:48:39.314132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.442 [2024-11-20 14:48:39.314141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.442 [2024-11-20 14:48:39.314313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.442 [2024-11-20 14:48:39.314467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.442 [2024-11-20 14:48:39.314475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.442 [2024-11-20 14:48:39.314480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.442 [2024-11-20 14:48:39.314486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.442 [2024-11-20 14:48:39.326126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.442 [2024-11-20 14:48:39.326724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.442 [2024-11-20 14:48:39.326755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.442 [2024-11-20 14:48:39.326764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.442 [2024-11-20 14:48:39.326933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.442 [2024-11-20 14:48:39.327086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.442 [2024-11-20 14:48:39.327094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.442 [2024-11-20 14:48:39.327099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.442 [2024-11-20 14:48:39.327105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.442 [2024-11-20 14:48:39.338753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.442 [2024-11-20 14:48:39.339213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.442 [2024-11-20 14:48:39.339249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.442 [2024-11-20 14:48:39.339258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.442 [2024-11-20 14:48:39.339426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.442 [2024-11-20 14:48:39.339579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.442 [2024-11-20 14:48:39.339585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.442 [2024-11-20 14:48:39.339592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.442 [2024-11-20 14:48:39.339598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.442 [2024-11-20 14:48:39.351387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.442 [2024-11-20 14:48:39.351952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.442 [2024-11-20 14:48:39.351984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.442 [2024-11-20 14:48:39.351993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.442 [2024-11-20 14:48:39.352159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.442 [2024-11-20 14:48:39.352319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.442 [2024-11-20 14:48:39.352327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.442 [2024-11-20 14:48:39.352333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.443 [2024-11-20 14:48:39.352339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.443 [2024-11-20 14:48:39.363990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.443 [2024-11-20 14:48:39.364573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.443 [2024-11-20 14:48:39.364605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.443 [2024-11-20 14:48:39.364613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.443 [2024-11-20 14:48:39.364779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.443 [2024-11-20 14:48:39.364932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.443 [2024-11-20 14:48:39.364939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.443 [2024-11-20 14:48:39.364948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.443 [2024-11-20 14:48:39.364954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.443 [2024-11-20 14:48:39.376600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.443 [2024-11-20 14:48:39.377157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.443 [2024-11-20 14:48:39.377188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.443 [2024-11-20 14:48:39.377197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.443 [2024-11-20 14:48:39.377369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.443 [2024-11-20 14:48:39.377523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.443 [2024-11-20 14:48:39.377530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.443 [2024-11-20 14:48:39.377537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.443 [2024-11-20 14:48:39.377543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.443 [2024-11-20 14:48:39.389185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.443 [2024-11-20 14:48:39.389661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.443 [2024-11-20 14:48:39.389677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.443 [2024-11-20 14:48:39.389683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.443 [2024-11-20 14:48:39.389833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.443 [2024-11-20 14:48:39.389983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.443 [2024-11-20 14:48:39.389990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.443 [2024-11-20 14:48:39.389996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.443 [2024-11-20 14:48:39.390001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.443 [2024-11-20 14:48:39.401768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.443 [2024-11-20 14:48:39.402124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.443 [2024-11-20 14:48:39.402138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.443 [2024-11-20 14:48:39.402143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.443 [2024-11-20 14:48:39.402297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.443 [2024-11-20 14:48:39.402447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.443 [2024-11-20 14:48:39.402454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.443 [2024-11-20 14:48:39.402459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.443 [2024-11-20 14:48:39.402464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.443 [2024-11-20 14:48:39.414390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.443 [2024-11-20 14:48:39.414902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.443 [2024-11-20 14:48:39.414933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.443 [2024-11-20 14:48:39.414943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.443 [2024-11-20 14:48:39.415108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.443 [2024-11-20 14:48:39.415269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.443 [2024-11-20 14:48:39.415277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.443 [2024-11-20 14:48:39.415283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.443 [2024-11-20 14:48:39.415289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.443 [2024-11-20 14:48:39.427077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.443 [2024-11-20 14:48:39.427688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.443 [2024-11-20 14:48:39.427719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.443 [2024-11-20 14:48:39.427728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.443 [2024-11-20 14:48:39.427894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.443 [2024-11-20 14:48:39.428048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.443 [2024-11-20 14:48:39.428055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.443 [2024-11-20 14:48:39.428061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.443 [2024-11-20 14:48:39.428067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.443 [2024-11-20 14:48:39.439727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.443 [2024-11-20 14:48:39.440190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.443 [2024-11-20 14:48:39.440206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.443 [2024-11-20 14:48:39.440212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.443 [2024-11-20 14:48:39.440367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.443 [2024-11-20 14:48:39.440517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.443 [2024-11-20 14:48:39.440524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.443 [2024-11-20 14:48:39.440529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.443 [2024-11-20 14:48:39.440534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.443 [2024-11-20 14:48:39.452345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.443 [2024-11-20 14:48:39.452919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.443 [2024-11-20 14:48:39.452954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.443 [2024-11-20 14:48:39.452962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.443 [2024-11-20 14:48:39.453127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.443 [2024-11-20 14:48:39.453286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.443 [2024-11-20 14:48:39.453294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.443 [2024-11-20 14:48:39.453300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.443 [2024-11-20 14:48:39.453306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.443 [2024-11-20 14:48:39.464957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.443 [2024-11-20 14:48:39.465576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.443 [2024-11-20 14:48:39.465607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.443 [2024-11-20 14:48:39.465616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.443 [2024-11-20 14:48:39.465781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.443 [2024-11-20 14:48:39.465934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.443 [2024-11-20 14:48:39.465941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.443 [2024-11-20 14:48:39.465947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.443 [2024-11-20 14:48:39.465953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.443 [2024-11-20 14:48:39.477599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.443 [2024-11-20 14:48:39.478101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.443 [2024-11-20 14:48:39.478117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.443 [2024-11-20 14:48:39.478122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.443 [2024-11-20 14:48:39.478277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.443 [2024-11-20 14:48:39.478428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.443 [2024-11-20 14:48:39.478435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.444 [2024-11-20 14:48:39.478441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.444 [2024-11-20 14:48:39.478446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.444 [2024-11-20 14:48:39.490224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.444 [2024-11-20 14:48:39.490690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.444 [2024-11-20 14:48:39.490705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.444 [2024-11-20 14:48:39.490711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.444 [2024-11-20 14:48:39.490865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.444 [2024-11-20 14:48:39.491016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.444 [2024-11-20 14:48:39.491022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.444 [2024-11-20 14:48:39.491028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.444 [2024-11-20 14:48:39.491033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.704 [2024-11-20 14:48:39.502810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.704 [2024-11-20 14:48:39.503181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.704 [2024-11-20 14:48:39.503194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.704 [2024-11-20 14:48:39.503200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.704 [2024-11-20 14:48:39.503352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.704 [2024-11-20 14:48:39.503503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.704 [2024-11-20 14:48:39.503509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.704 [2024-11-20 14:48:39.503516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.704 [2024-11-20 14:48:39.503521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.704 [2024-11-20 14:48:39.515435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.704 [2024-11-20 14:48:39.515964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.704 [2024-11-20 14:48:39.515977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.704 [2024-11-20 14:48:39.515983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.704 [2024-11-20 14:48:39.516132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.704 [2024-11-20 14:48:39.516286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.704 [2024-11-20 14:48:39.516293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.704 [2024-11-20 14:48:39.516298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.704 [2024-11-20 14:48:39.516303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.704 [2024-11-20 14:48:39.528092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.704 [2024-11-20 14:48:39.528586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.704 [2024-11-20 14:48:39.528617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.704 [2024-11-20 14:48:39.528626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.704 [2024-11-20 14:48:39.528791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.704 [2024-11-20 14:48:39.528944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.704 [2024-11-20 14:48:39.528951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.704 [2024-11-20 14:48:39.528961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.704 [2024-11-20 14:48:39.528967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.704 [2024-11-20 14:48:39.540768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.704 [2024-11-20 14:48:39.541225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.704 [2024-11-20 14:48:39.541240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.704 [2024-11-20 14:48:39.541250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.704 [2024-11-20 14:48:39.541401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.704 [2024-11-20 14:48:39.541551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.704 [2024-11-20 14:48:39.541557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.704 [2024-11-20 14:48:39.541562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.704 [2024-11-20 14:48:39.541568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.704 [2024-11-20 14:48:39.553358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.704 [2024-11-20 14:48:39.553923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.704 [2024-11-20 14:48:39.553954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.704 [2024-11-20 14:48:39.553963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.704 [2024-11-20 14:48:39.554128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.704 [2024-11-20 14:48:39.554290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.704 [2024-11-20 14:48:39.554298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.704 [2024-11-20 14:48:39.554304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.704 [2024-11-20 14:48:39.554310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.704 [2024-11-20 14:48:39.565977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.704 [2024-11-20 14:48:39.566442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.704 [2024-11-20 14:48:39.566458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.704 [2024-11-20 14:48:39.566464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.704 [2024-11-20 14:48:39.566614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.704 [2024-11-20 14:48:39.566765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.704 [2024-11-20 14:48:39.566772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.704 [2024-11-20 14:48:39.566777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.704 [2024-11-20 14:48:39.566782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.704 [2024-11-20 14:48:39.578581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.704 [2024-11-20 14:48:39.579072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.704 [2024-11-20 14:48:39.579086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.704 [2024-11-20 14:48:39.579092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.704 [2024-11-20 14:48:39.579242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.704 [2024-11-20 14:48:39.579398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.704 [2024-11-20 14:48:39.579411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.704 [2024-11-20 14:48:39.579416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.704 [2024-11-20 14:48:39.579422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.704 [2024-11-20 14:48:39.591222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.704 [2024-11-20 14:48:39.591788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.704 [2024-11-20 14:48:39.591821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.704 [2024-11-20 14:48:39.591830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.704 [2024-11-20 14:48:39.591995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.704 [2024-11-20 14:48:39.592148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.704 [2024-11-20 14:48:39.592155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.704 [2024-11-20 14:48:39.592161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.704 [2024-11-20 14:48:39.592167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.704 [2024-11-20 14:48:39.603835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.705 [2024-11-20 14:48:39.604449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.705 [2024-11-20 14:48:39.604481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.705 [2024-11-20 14:48:39.604489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.705 [2024-11-20 14:48:39.604656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.705 [2024-11-20 14:48:39.604810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.705 [2024-11-20 14:48:39.604817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.705 [2024-11-20 14:48:39.604823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.705 [2024-11-20 14:48:39.604829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.705 [2024-11-20 14:48:39.616499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.705 [2024-11-20 14:48:39.617000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.705 [2024-11-20 14:48:39.617019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.705 [2024-11-20 14:48:39.617026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.705 [2024-11-20 14:48:39.617176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.705 [2024-11-20 14:48:39.617333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.705 [2024-11-20 14:48:39.617340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.705 [2024-11-20 14:48:39.617346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.705 [2024-11-20 14:48:39.617352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.705 [2024-11-20 14:48:39.629165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.705 [2024-11-20 14:48:39.629613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.705 [2024-11-20 14:48:39.629645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.705 [2024-11-20 14:48:39.629654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.705 [2024-11-20 14:48:39.629821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.705 [2024-11-20 14:48:39.629974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.705 [2024-11-20 14:48:39.629982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.705 [2024-11-20 14:48:39.629987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.705 [2024-11-20 14:48:39.629993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.705 [2024-11-20 14:48:39.641799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.705 [2024-11-20 14:48:39.642321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.705 [2024-11-20 14:48:39.642352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.705 [2024-11-20 14:48:39.642361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.705 [2024-11-20 14:48:39.642529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.705 [2024-11-20 14:48:39.642682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.705 [2024-11-20 14:48:39.642689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.705 [2024-11-20 14:48:39.642695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.705 [2024-11-20 14:48:39.642701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.705 [2024-11-20 14:48:39.654491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.705 [2024-11-20 14:48:39.654855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.705 [2024-11-20 14:48:39.654871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.705 [2024-11-20 14:48:39.654877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.705 [2024-11-20 14:48:39.655031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.705 [2024-11-20 14:48:39.655181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.705 [2024-11-20 14:48:39.655187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.705 [2024-11-20 14:48:39.655193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.705 [2024-11-20 14:48:39.655198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.705 6088.80 IOPS, 23.78 MiB/s [2024-11-20T13:48:39.765Z] [2024-11-20 14:48:39.668305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.705 [2024-11-20 14:48:39.668799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.705 [2024-11-20 14:48:39.668813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.705 [2024-11-20 14:48:39.668819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.705 [2024-11-20 14:48:39.668969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.705 [2024-11-20 14:48:39.669120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.705 [2024-11-20 14:48:39.669127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.705 [2024-11-20 14:48:39.669132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.705 [2024-11-20 14:48:39.669137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.705 [2024-11-20 14:48:39.680928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.705 [2024-11-20 14:48:39.681529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.705 [2024-11-20 14:48:39.681560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.705 [2024-11-20 14:48:39.681569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.705 [2024-11-20 14:48:39.681734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.705 [2024-11-20 14:48:39.681888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.705 [2024-11-20 14:48:39.681895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.705 [2024-11-20 14:48:39.681900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.705 [2024-11-20 14:48:39.681906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.705 [2024-11-20 14:48:39.693715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.705 [2024-11-20 14:48:39.694187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.705 [2024-11-20 14:48:39.694203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.705 [2024-11-20 14:48:39.694209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.705 [2024-11-20 14:48:39.694363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.705 [2024-11-20 14:48:39.694514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.705 [2024-11-20 14:48:39.694525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.705 [2024-11-20 14:48:39.694530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.705 [2024-11-20 14:48:39.694535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.705 [2024-11-20 14:48:39.706329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.705 [2024-11-20 14:48:39.706913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.705 [2024-11-20 14:48:39.706943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.705 [2024-11-20 14:48:39.706952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.705 [2024-11-20 14:48:39.707118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.705 [2024-11-20 14:48:39.707276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.705 [2024-11-20 14:48:39.707284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.705 [2024-11-20 14:48:39.707290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.705 [2024-11-20 14:48:39.707296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.705 [2024-11-20 14:48:39.718943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.705 [2024-11-20 14:48:39.719467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.705 [2024-11-20 14:48:39.719498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.705 [2024-11-20 14:48:39.719507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.705 [2024-11-20 14:48:39.719675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.705 [2024-11-20 14:48:39.719828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.705 [2024-11-20 14:48:39.719835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.705 [2024-11-20 14:48:39.719841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.706 [2024-11-20 14:48:39.719846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.706 [2024-11-20 14:48:39.731645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.706 [2024-11-20 14:48:39.732137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.706 [2024-11-20 14:48:39.732152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.706 [2024-11-20 14:48:39.732159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.706 [2024-11-20 14:48:39.732314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.706 [2024-11-20 14:48:39.732464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.706 [2024-11-20 14:48:39.732471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.706 [2024-11-20 14:48:39.732477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.706 [2024-11-20 14:48:39.732482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.706 [2024-11-20 14:48:39.744282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.706 [2024-11-20 14:48:39.744767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.706 [2024-11-20 14:48:39.744781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.706 [2024-11-20 14:48:39.744787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.706 [2024-11-20 14:48:39.744936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.706 [2024-11-20 14:48:39.745086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.706 [2024-11-20 14:48:39.745093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.706 [2024-11-20 14:48:39.745099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.706 [2024-11-20 14:48:39.745104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.706 [2024-11-20 14:48:39.756904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.706 [2024-11-20 14:48:39.757485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.706 [2024-11-20 14:48:39.757516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.706 [2024-11-20 14:48:39.757525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.706 [2024-11-20 14:48:39.757691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.706 [2024-11-20 14:48:39.757844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.706 [2024-11-20 14:48:39.757851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.706 [2024-11-20 14:48:39.757857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.706 [2024-11-20 14:48:39.757863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.967 [2024-11-20 14:48:39.769539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.968 [2024-11-20 14:48:39.770137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.968 [2024-11-20 14:48:39.770168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.968 [2024-11-20 14:48:39.770177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.968 [2024-11-20 14:48:39.770350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.968 [2024-11-20 14:48:39.770504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.968 [2024-11-20 14:48:39.770511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.968 [2024-11-20 14:48:39.770517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.968 [2024-11-20 14:48:39.770523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.968 [2024-11-20 14:48:39.782158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.968 [2024-11-20 14:48:39.782740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.968 [2024-11-20 14:48:39.782775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.968 [2024-11-20 14:48:39.782784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.968 [2024-11-20 14:48:39.782948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.968 [2024-11-20 14:48:39.783102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.968 [2024-11-20 14:48:39.783109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.968 [2024-11-20 14:48:39.783114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.968 [2024-11-20 14:48:39.783120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.968 [2024-11-20 14:48:39.794771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.968 [2024-11-20 14:48:39.795384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.968 [2024-11-20 14:48:39.795415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.968 [2024-11-20 14:48:39.795424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.968 [2024-11-20 14:48:39.795589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.968 [2024-11-20 14:48:39.795742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.968 [2024-11-20 14:48:39.795749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.968 [2024-11-20 14:48:39.795755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.968 [2024-11-20 14:48:39.795760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.968 [2024-11-20 14:48:39.807398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.968 [2024-11-20 14:48:39.807993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.968 [2024-11-20 14:48:39.808024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.968 [2024-11-20 14:48:39.808033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.968 [2024-11-20 14:48:39.808198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.968 [2024-11-20 14:48:39.808362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.968 [2024-11-20 14:48:39.808370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.968 [2024-11-20 14:48:39.808375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.968 [2024-11-20 14:48:39.808381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.968 [2024-11-20 14:48:39.820014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.968 [2024-11-20 14:48:39.820619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.968 [2024-11-20 14:48:39.820650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.968 [2024-11-20 14:48:39.820659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.968 [2024-11-20 14:48:39.820832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.968 [2024-11-20 14:48:39.820985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.968 [2024-11-20 14:48:39.820992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.968 [2024-11-20 14:48:39.820998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.968 [2024-11-20 14:48:39.821004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.968 [2024-11-20 14:48:39.832637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.968 [2024-11-20 14:48:39.833218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.968 [2024-11-20 14:48:39.833255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.968 [2024-11-20 14:48:39.833264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.968 [2024-11-20 14:48:39.833430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.968 [2024-11-20 14:48:39.833583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.968 [2024-11-20 14:48:39.833590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.968 [2024-11-20 14:48:39.833596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.968 [2024-11-20 14:48:39.833601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.968 [2024-11-20 14:48:39.845255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.968 [2024-11-20 14:48:39.845829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.968 [2024-11-20 14:48:39.845860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.968 [2024-11-20 14:48:39.845869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.968 [2024-11-20 14:48:39.846035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.968 [2024-11-20 14:48:39.846188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.968 [2024-11-20 14:48:39.846195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.968 [2024-11-20 14:48:39.846200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.968 [2024-11-20 14:48:39.846206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.968 [2024-11-20 14:48:39.857849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.968 [2024-11-20 14:48:39.858447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.968 [2024-11-20 14:48:39.858478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.968 [2024-11-20 14:48:39.858487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.968 [2024-11-20 14:48:39.858653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.968 [2024-11-20 14:48:39.858806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.968 [2024-11-20 14:48:39.858816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.968 [2024-11-20 14:48:39.858822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.968 [2024-11-20 14:48:39.858828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.968 [2024-11-20 14:48:39.870497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.968 [2024-11-20 14:48:39.871075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.968 [2024-11-20 14:48:39.871107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.968 [2024-11-20 14:48:39.871115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.968 [2024-11-20 14:48:39.871289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.968 [2024-11-20 14:48:39.871443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.968 [2024-11-20 14:48:39.871450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.968 [2024-11-20 14:48:39.871456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.968 [2024-11-20 14:48:39.871462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.968 [2024-11-20 14:48:39.883096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.968 [2024-11-20 14:48:39.883703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.968 [2024-11-20 14:48:39.883735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.968 [2024-11-20 14:48:39.883743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.968 [2024-11-20 14:48:39.883909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.968 [2024-11-20 14:48:39.884062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.968 [2024-11-20 14:48:39.884069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.969 [2024-11-20 14:48:39.884075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.969 [2024-11-20 14:48:39.884081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.969 [2024-11-20 14:48:39.895724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.969 [2024-11-20 14:48:39.896326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.969 [2024-11-20 14:48:39.896358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.969 [2024-11-20 14:48:39.896366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.969 [2024-11-20 14:48:39.896532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.969 [2024-11-20 14:48:39.896685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.969 [2024-11-20 14:48:39.896692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.969 [2024-11-20 14:48:39.896697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.969 [2024-11-20 14:48:39.896703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.969 [2024-11-20 14:48:39.908353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.969 [2024-11-20 14:48:39.908945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.969 [2024-11-20 14:48:39.908976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.969 [2024-11-20 14:48:39.908985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.969 [2024-11-20 14:48:39.909150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.969 [2024-11-20 14:48:39.909312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.969 [2024-11-20 14:48:39.909320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.969 [2024-11-20 14:48:39.909325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.969 [2024-11-20 14:48:39.909331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.969 [2024-11-20 14:48:39.920971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.969 [2024-11-20 14:48:39.921337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.969 [2024-11-20 14:48:39.921354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.969 [2024-11-20 14:48:39.921360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.969 [2024-11-20 14:48:39.921510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.969 [2024-11-20 14:48:39.921661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.969 [2024-11-20 14:48:39.921667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.969 [2024-11-20 14:48:39.921672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.969 [2024-11-20 14:48:39.921677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.969 [2024-11-20 14:48:39.933589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.969 [2024-11-20 14:48:39.934175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.969 [2024-11-20 14:48:39.934206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.969 [2024-11-20 14:48:39.934214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.969 [2024-11-20 14:48:39.934387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.969 [2024-11-20 14:48:39.934541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.969 [2024-11-20 14:48:39.934549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.969 [2024-11-20 14:48:39.934554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.969 [2024-11-20 14:48:39.934560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.969 [2024-11-20 14:48:39.946212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.969 [2024-11-20 14:48:39.946820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.969 [2024-11-20 14:48:39.946854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.969 [2024-11-20 14:48:39.946863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.969 [2024-11-20 14:48:39.947028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.969 [2024-11-20 14:48:39.947181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.969 [2024-11-20 14:48:39.947188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.969 [2024-11-20 14:48:39.947194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.969 [2024-11-20 14:48:39.947199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.969 [2024-11-20 14:48:39.958841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.969 [2024-11-20 14:48:39.959336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.969 [2024-11-20 14:48:39.959352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.969 [2024-11-20 14:48:39.959358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.969 [2024-11-20 14:48:39.959508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.969 [2024-11-20 14:48:39.959666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.969 [2024-11-20 14:48:39.959673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.969 [2024-11-20 14:48:39.959678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.969 [2024-11-20 14:48:39.959683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.969 [2024-11-20 14:48:39.971456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.969 [2024-11-20 14:48:39.972053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.969 [2024-11-20 14:48:39.972084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.969 [2024-11-20 14:48:39.972093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.969 [2024-11-20 14:48:39.972266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.969 [2024-11-20 14:48:39.972420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.969 [2024-11-20 14:48:39.972427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.969 [2024-11-20 14:48:39.972433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.969 [2024-11-20 14:48:39.972439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.969 [2024-11-20 14:48:39.984079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.969 [2024-11-20 14:48:39.984660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.969 [2024-11-20 14:48:39.984692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.969 [2024-11-20 14:48:39.984700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.969 [2024-11-20 14:48:39.984869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.969 [2024-11-20 14:48:39.985022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.969 [2024-11-20 14:48:39.985029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.969 [2024-11-20 14:48:39.985035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.969 [2024-11-20 14:48:39.985041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.969 [2024-11-20 14:48:39.996694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.969 [2024-11-20 14:48:39.997155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.969 [2024-11-20 14:48:39.997171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.969 [2024-11-20 14:48:39.997177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.969 [2024-11-20 14:48:39.997335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.969 [2024-11-20 14:48:39.997487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.969 [2024-11-20 14:48:39.997494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.969 [2024-11-20 14:48:39.997500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.969 [2024-11-20 14:48:39.997505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.969 [2024-11-20 14:48:40.009418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.969 [2024-11-20 14:48:40.009679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.969 [2024-11-20 14:48:40.009707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.969 [2024-11-20 14:48:40.009716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.969 [2024-11-20 14:48:40.009904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.970 [2024-11-20 14:48:40.010057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.970 [2024-11-20 14:48:40.010065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.970 [2024-11-20 14:48:40.010070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.970 [2024-11-20 14:48:40.010075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:32.970 [2024-11-20 14:48:40.022143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:32.970 [2024-11-20 14:48:40.022590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.970 [2024-11-20 14:48:40.022604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:32.970 [2024-11-20 14:48:40.022610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:32.970 [2024-11-20 14:48:40.022759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:32.970 [2024-11-20 14:48:40.022909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:32.970 [2024-11-20 14:48:40.022915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:32.970 [2024-11-20 14:48:40.022925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:32.970 [2024-11-20 14:48:40.022930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.231 [2024-11-20 14:48:40.034830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.231 [2024-11-20 14:48:40.035392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.231 [2024-11-20 14:48:40.035423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.231 [2024-11-20 14:48:40.035432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.231 [2024-11-20 14:48:40.035599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.231 [2024-11-20 14:48:40.035753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.231 [2024-11-20 14:48:40.035759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.231 [2024-11-20 14:48:40.035765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.231 [2024-11-20 14:48:40.035771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.231 [2024-11-20 14:48:40.047444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.231 [2024-11-20 14:48:40.048004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.231 [2024-11-20 14:48:40.048036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.231 [2024-11-20 14:48:40.048045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.231 [2024-11-20 14:48:40.048210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.231 [2024-11-20 14:48:40.048372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.231 [2024-11-20 14:48:40.048380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.231 [2024-11-20 14:48:40.048385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.231 [2024-11-20 14:48:40.048391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.231 [2024-11-20 14:48:40.060169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.231 [2024-11-20 14:48:40.060621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.231 [2024-11-20 14:48:40.060637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.231 [2024-11-20 14:48:40.060642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.231 [2024-11-20 14:48:40.060793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.231 [2024-11-20 14:48:40.060943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.231 [2024-11-20 14:48:40.060950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.231 [2024-11-20 14:48:40.060956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.231 [2024-11-20 14:48:40.060961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.231 [2024-11-20 14:48:40.072918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.231 [2024-11-20 14:48:40.073550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.231 [2024-11-20 14:48:40.073582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.231 [2024-11-20 14:48:40.073591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.231 [2024-11-20 14:48:40.073756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.231 [2024-11-20 14:48:40.073909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.231 [2024-11-20 14:48:40.073916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.231 [2024-11-20 14:48:40.073922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.231 [2024-11-20 14:48:40.073928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.231 [2024-11-20 14:48:40.085575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.231 [2024-11-20 14:48:40.086063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.231 [2024-11-20 14:48:40.086093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.231 [2024-11-20 14:48:40.086102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.231 [2024-11-20 14:48:40.086278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.231 [2024-11-20 14:48:40.086432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.231 [2024-11-20 14:48:40.086439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.231 [2024-11-20 14:48:40.086445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.231 [2024-11-20 14:48:40.086451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.231 [2024-11-20 14:48:40.098243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.231 [2024-11-20 14:48:40.098832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.231 [2024-11-20 14:48:40.098864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.231 [2024-11-20 14:48:40.098873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.231 [2024-11-20 14:48:40.099038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.231 [2024-11-20 14:48:40.099191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.232 [2024-11-20 14:48:40.099198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.232 [2024-11-20 14:48:40.099204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.232 [2024-11-20 14:48:40.099210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.232 [2024-11-20 14:48:40.110869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.232 [2024-11-20 14:48:40.111480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.232 [2024-11-20 14:48:40.111516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.232 [2024-11-20 14:48:40.111525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.232 [2024-11-20 14:48:40.111691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.232 [2024-11-20 14:48:40.111845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.232 [2024-11-20 14:48:40.111853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.232 [2024-11-20 14:48:40.111859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.232 [2024-11-20 14:48:40.111865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.232 [2024-11-20 14:48:40.123527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.232 [2024-11-20 14:48:40.123992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.232 [2024-11-20 14:48:40.124008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.232 [2024-11-20 14:48:40.124014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.232 [2024-11-20 14:48:40.124164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.232 [2024-11-20 14:48:40.124322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.232 [2024-11-20 14:48:40.124329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.232 [2024-11-20 14:48:40.124334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.232 [2024-11-20 14:48:40.124339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.232 [2024-11-20 14:48:40.136126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.232 [2024-11-20 14:48:40.136681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.232 [2024-11-20 14:48:40.136713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.232 [2024-11-20 14:48:40.136721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.232 [2024-11-20 14:48:40.136886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.232 [2024-11-20 14:48:40.137039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.232 [2024-11-20 14:48:40.137046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.232 [2024-11-20 14:48:40.137052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.232 [2024-11-20 14:48:40.137058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.232 [2024-11-20 14:48:40.148724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.232 [2024-11-20 14:48:40.149327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.232 [2024-11-20 14:48:40.149358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.232 [2024-11-20 14:48:40.149367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.232 [2024-11-20 14:48:40.149537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.232 [2024-11-20 14:48:40.149689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.232 [2024-11-20 14:48:40.149697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.232 [2024-11-20 14:48:40.149703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.232 [2024-11-20 14:48:40.149709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.232 [2024-11-20 14:48:40.161372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.232 [2024-11-20 14:48:40.161806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.232 [2024-11-20 14:48:40.161837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.232 [2024-11-20 14:48:40.161846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.232 [2024-11-20 14:48:40.162011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.232 [2024-11-20 14:48:40.162163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.232 [2024-11-20 14:48:40.162170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.232 [2024-11-20 14:48:40.162176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.232 [2024-11-20 14:48:40.162182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.232 [2024-11-20 14:48:40.173980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.232 [2024-11-20 14:48:40.174584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.232 [2024-11-20 14:48:40.174615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.232 [2024-11-20 14:48:40.174624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.232 [2024-11-20 14:48:40.174789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.232 [2024-11-20 14:48:40.174942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.232 [2024-11-20 14:48:40.174949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.232 [2024-11-20 14:48:40.174955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.232 [2024-11-20 14:48:40.174961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.232 [2024-11-20 14:48:40.186612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.232 [2024-11-20 14:48:40.187188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.232 [2024-11-20 14:48:40.187219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.232 [2024-11-20 14:48:40.187228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.232 [2024-11-20 14:48:40.187402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.232 [2024-11-20 14:48:40.187556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.232 [2024-11-20 14:48:40.187564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.232 [2024-11-20 14:48:40.187574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.232 [2024-11-20 14:48:40.187580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.232 [2024-11-20 14:48:40.199233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.232 [2024-11-20 14:48:40.199833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.232 [2024-11-20 14:48:40.199864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.232 [2024-11-20 14:48:40.199873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.232 [2024-11-20 14:48:40.200038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.232 [2024-11-20 14:48:40.200191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.232 [2024-11-20 14:48:40.200198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.232 [2024-11-20 14:48:40.200204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.232 [2024-11-20 14:48:40.200211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 4091649 Killed "${NVMF_APP[@]}" "$@" 00:28:33.232 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:33.232 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:33.232 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:33.232 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.232 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.232 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=4093344 00:28:33.232 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 4093344 00:28:33.232 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:33.232 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 4093344 ']' 00:28:33.232 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.232 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.232 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:33.232 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.232 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.232 [2024-11-20 14:48:40.211859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.232 [2024-11-20 14:48:40.212409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.232 [2024-11-20 14:48:40.212441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.232 [2024-11-20 14:48:40.212452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.232 [2024-11-20 14:48:40.212622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.232 [2024-11-20 14:48:40.212780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.232 [2024-11-20 14:48:40.212787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.232 [2024-11-20 14:48:40.212793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.232 [2024-11-20 14:48:40.212799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.232 [2024-11-20 14:48:40.224451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.232 [2024-11-20 14:48:40.224954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.233 [2024-11-20 14:48:40.224970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.233 [2024-11-20 14:48:40.224976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.233 [2024-11-20 14:48:40.225126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.233 [2024-11-20 14:48:40.225281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.233 [2024-11-20 14:48:40.225289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.233 [2024-11-20 14:48:40.225294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.233 [2024-11-20 14:48:40.225299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.233 [2024-11-20 14:48:40.237078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.233 [2024-11-20 14:48:40.237567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.233 [2024-11-20 14:48:40.237581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.233 [2024-11-20 14:48:40.237587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.233 [2024-11-20 14:48:40.237736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.233 [2024-11-20 14:48:40.237886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.233 [2024-11-20 14:48:40.237894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.233 [2024-11-20 14:48:40.237899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.233 [2024-11-20 14:48:40.237905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.233 [2024-11-20 14:48:40.246181] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:28:33.233 [2024-11-20 14:48:40.246229] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.233 [2024-11-20 14:48:40.249684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.233 [2024-11-20 14:48:40.250171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.233 [2024-11-20 14:48:40.250185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.233 [2024-11-20 14:48:40.250192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.233 [2024-11-20 14:48:40.250344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.233 [2024-11-20 14:48:40.250500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.233 [2024-11-20 14:48:40.250508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.233 [2024-11-20 14:48:40.250513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.233 [2024-11-20 14:48:40.250519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.233 [2024-11-20 14:48:40.262313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.233 [2024-11-20 14:48:40.262843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.233 [2024-11-20 14:48:40.262856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.233 [2024-11-20 14:48:40.262863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.233 [2024-11-20 14:48:40.263012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.233 [2024-11-20 14:48:40.263163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.233 [2024-11-20 14:48:40.263171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.233 [2024-11-20 14:48:40.263176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.233 [2024-11-20 14:48:40.263182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.233 [2024-11-20 14:48:40.274972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.233 [2024-11-20 14:48:40.275534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.233 [2024-11-20 14:48:40.275566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.233 [2024-11-20 14:48:40.275576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.233 [2024-11-20 14:48:40.275741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.233 [2024-11-20 14:48:40.275895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.233 [2024-11-20 14:48:40.275903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.233 [2024-11-20 14:48:40.275909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.233 [2024-11-20 14:48:40.275915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.233 [2024-11-20 14:48:40.287593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.233 [2024-11-20 14:48:40.288170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.233 [2024-11-20 14:48:40.288201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.233 [2024-11-20 14:48:40.288211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.233 [2024-11-20 14:48:40.288385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.233 [2024-11-20 14:48:40.288539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.233 [2024-11-20 14:48:40.288546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.233 [2024-11-20 14:48:40.288558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.233 [2024-11-20 14:48:40.288564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.495 [2024-11-20 14:48:40.300202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.495 [2024-11-20 14:48:40.300574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-20 14:48:40.300590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.495 [2024-11-20 14:48:40.300596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.495 [2024-11-20 14:48:40.300747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.495 [2024-11-20 14:48:40.300897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.495 [2024-11-20 14:48:40.300904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.495 [2024-11-20 14:48:40.300909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.495 [2024-11-20 14:48:40.300915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.495 [2024-11-20 14:48:40.312924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.495 [2024-11-20 14:48:40.313598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-20 14:48:40.313629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.495 [2024-11-20 14:48:40.313638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.495 [2024-11-20 14:48:40.313806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.495 [2024-11-20 14:48:40.313960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.495 [2024-11-20 14:48:40.313967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.495 [2024-11-20 14:48:40.313973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.495 [2024-11-20 14:48:40.313979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.495 [2024-11-20 14:48:40.318503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:33.495 [2024-11-20 14:48:40.325629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.495 [2024-11-20 14:48:40.326100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-20 14:48:40.326116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.495 [2024-11-20 14:48:40.326122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.495 [2024-11-20 14:48:40.326277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.495 [2024-11-20 14:48:40.326429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.495 [2024-11-20 14:48:40.326437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.495 [2024-11-20 14:48:40.326443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.495 [2024-11-20 14:48:40.326449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.495 [2024-11-20 14:48:40.338255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.495 [2024-11-20 14:48:40.338833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-20 14:48:40.338866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.495 [2024-11-20 14:48:40.338876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.495 [2024-11-20 14:48:40.339042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.495 [2024-11-20 14:48:40.339197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.495 [2024-11-20 14:48:40.339205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.495 [2024-11-20 14:48:40.339211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.495 [2024-11-20 14:48:40.339217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.496 [2024-11-20 14:48:40.347798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.496 [2024-11-20 14:48:40.347822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.496 [2024-11-20 14:48:40.347828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.496 [2024-11-20 14:48:40.347834] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:33.496 [2024-11-20 14:48:40.347840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:33.496 [2024-11-20 14:48:40.348941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.496 [2024-11-20 14:48:40.349097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.496 [2024-11-20 14:48:40.349099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:33.496 [2024-11-20 14:48:40.350867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.496 [2024-11-20 14:48:40.351551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-20 14:48:40.351583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.496 [2024-11-20 14:48:40.351593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.496 [2024-11-20 14:48:40.351760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.496 [2024-11-20 14:48:40.351914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.496 [2024-11-20 14:48:40.351921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.496 [2024-11-20 14:48:40.351928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.496 [2024-11-20 14:48:40.351934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.496 [2024-11-20 14:48:40.363462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.496 [2024-11-20 14:48:40.364001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-20 14:48:40.364016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.496 [2024-11-20 14:48:40.364023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.496 [2024-11-20 14:48:40.364173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.496 [2024-11-20 14:48:40.364333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.496 [2024-11-20 14:48:40.364340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.496 [2024-11-20 14:48:40.364346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.496 [2024-11-20 14:48:40.364351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.496 [2024-11-20 14:48:40.376141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.496 [2024-11-20 14:48:40.376483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-20 14:48:40.376498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.496 [2024-11-20 14:48:40.376504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.496 [2024-11-20 14:48:40.376654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.496 [2024-11-20 14:48:40.376804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.496 [2024-11-20 14:48:40.376811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.496 [2024-11-20 14:48:40.376816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.496 [2024-11-20 14:48:40.376821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.496 [2024-11-20 14:48:40.388747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.496 [2024-11-20 14:48:40.389343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-20 14:48:40.389379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.496 [2024-11-20 14:48:40.389388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.496 [2024-11-20 14:48:40.389559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.496 [2024-11-20 14:48:40.389713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.496 [2024-11-20 14:48:40.389720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.496 [2024-11-20 14:48:40.389726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.496 [2024-11-20 14:48:40.389733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.496 [2024-11-20 14:48:40.401390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.496 [2024-11-20 14:48:40.401987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-20 14:48:40.402019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.496 [2024-11-20 14:48:40.402028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.496 [2024-11-20 14:48:40.402194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.496 [2024-11-20 14:48:40.402355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.496 [2024-11-20 14:48:40.402363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.496 [2024-11-20 14:48:40.402376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.496 [2024-11-20 14:48:40.402382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.496 [2024-11-20 14:48:40.414015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.496 [2024-11-20 14:48:40.414613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-20 14:48:40.414644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.496 [2024-11-20 14:48:40.414653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.496 [2024-11-20 14:48:40.414821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.496 [2024-11-20 14:48:40.414974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.496 [2024-11-20 14:48:40.414983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.496 [2024-11-20 14:48:40.414988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.496 [2024-11-20 14:48:40.414994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.496 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.496 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:33.496 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:33.496 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:33.496 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.496 [2024-11-20 14:48:40.426643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.496 [2024-11-20 14:48:40.427218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-20 14:48:40.427255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.496 [2024-11-20 14:48:40.427265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.496 [2024-11-20 14:48:40.427434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.496 [2024-11-20 14:48:40.427587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.496 [2024-11-20 14:48:40.427594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.496 [2024-11-20 14:48:40.427600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.496 [2024-11-20 14:48:40.427606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.496 [2024-11-20 14:48:40.439270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.496 [2024-11-20 14:48:40.439849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-20 14:48:40.439881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.496 [2024-11-20 14:48:40.439890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.496 [2024-11-20 14:48:40.440055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.496 [2024-11-20 14:48:40.440209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.496 [2024-11-20 14:48:40.440220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.496 [2024-11-20 14:48:40.440227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.496 [2024-11-20 14:48:40.440233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.496 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.496 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:33.496 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.496 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.496 [2024-11-20 14:48:40.451889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.496 [2024-11-20 14:48:40.452366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.496 [2024-11-20 14:48:40.452472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-20 14:48:40.452503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.497 [2024-11-20 14:48:40.452512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.497 [2024-11-20 14:48:40.452678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.497 [2024-11-20 14:48:40.452831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.497 [2024-11-20 14:48:40.452839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.497 [2024-11-20 14:48:40.452845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.497 [2024-11-20 14:48:40.452850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.497 [2024-11-20 14:48:40.464507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.497 [2024-11-20 14:48:40.465028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-20 14:48:40.465044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.497 [2024-11-20 14:48:40.465049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.497 [2024-11-20 14:48:40.465199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.497 [2024-11-20 14:48:40.465355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.497 [2024-11-20 14:48:40.465363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.497 [2024-11-20 14:48:40.465368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.497 [2024-11-20 14:48:40.465373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.497 [2024-11-20 14:48:40.477148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.497 [2024-11-20 14:48:40.477856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-20 14:48:40.477892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.497 [2024-11-20 14:48:40.477901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.497 [2024-11-20 14:48:40.478067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.497 [2024-11-20 14:48:40.478219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.497 [2024-11-20 14:48:40.478227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.497 [2024-11-20 14:48:40.478233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.497 [2024-11-20 14:48:40.478239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.497 Malloc0 00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:33.497 [2024-11-20 14:48:40.489801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.497 [2024-11-20 14:48:40.490282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-20 14:48:40.490313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22576a0 with addr=10.0.0.2, port=4420 00:28:33.497 [2024-11-20 14:48:40.490322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22576a0 is same with the state(6) to be set 00:28:33.497 [2024-11-20 14:48:40.490489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22576a0 (9): Bad file descriptor 00:28:33.497 [2024-11-20 14:48:40.490642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.497 [2024-11-20 14:48:40.490649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.497 [2024-11-20 14:48:40.490655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.497 [2024-11-20 14:48:40.490661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.497 [2024-11-20 14:48:40.500972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.497 [2024-11-20 14:48:40.502450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.497 14:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 4092013 00:28:33.757 [2024-11-20 14:48:40.612166] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:28:34.696 5192.17 IOPS, 20.28 MiB/s [2024-11-20T13:48:42.693Z] 6313.14 IOPS, 24.66 MiB/s [2024-11-20T13:48:44.072Z] 7148.62 IOPS, 27.92 MiB/s [2024-11-20T13:48:45.012Z] 7788.67 IOPS, 30.42 MiB/s [2024-11-20T13:48:45.951Z] 8318.50 IOPS, 32.49 MiB/s [2024-11-20T13:48:46.889Z] 8744.00 IOPS, 34.16 MiB/s [2024-11-20T13:48:47.830Z] 9107.83 IOPS, 35.58 MiB/s [2024-11-20T13:48:48.769Z] 9407.92 IOPS, 36.75 MiB/s [2024-11-20T13:48:49.708Z] 9659.29 IOPS, 37.73 MiB/s [2024-11-20T13:48:49.708Z] 9878.47 IOPS, 38.59 MiB/s 00:28:42.648 Latency(us) 00:28:42.648 [2024-11-20T13:48:49.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.648 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:42.648 Verification LBA range: start 0x0 length 0x4000 00:28:42.648 Nvme1n1 : 15.01 9878.82 38.59 12241.60 0.00 5768.48 549.55 16165.55 00:28:42.648 [2024-11-20T13:48:49.708Z] =================================================================================================================== 00:28:42.648 [2024-11-20T13:48:49.708Z] Total : 9878.82 38.59 12241.60 0.00 5768.48 549.55 16165.55 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:42.908 14:48:49 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:42.908 rmmod nvme_tcp 00:28:42.908 rmmod nvme_fabrics 00:28:42.908 rmmod nvme_keyring 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 4093344 ']' 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 4093344 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 4093344 ']' 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 4093344 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4093344 00:28:42.908 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:42.909 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:42.909 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4093344' 00:28:42.909 killing 
process with pid 4093344 00:28:42.909 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 4093344 00:28:42.909 14:48:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 4093344 00:28:43.168 14:48:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:43.168 14:48:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:43.168 14:48:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:43.168 14:48:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:43.168 14:48:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:43.168 14:48:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:28:43.168 14:48:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:28:43.168 14:48:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:43.168 14:48:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:43.168 14:48:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.168 14:48:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.168 14:48:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.079 14:48:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:45.079 00:28:45.079 real 0m25.209s 00:28:45.079 user 1m0.150s 00:28:45.079 sys 0m5.771s 00:28:45.079 14:48:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:45.079 14:48:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:45.079 ************************************ 00:28:45.079 END TEST 
nvmf_bdevperf 00:28:45.079 ************************************ 00:28:45.079 14:48:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:45.079 14:48:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:45.079 14:48:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:45.079 14:48:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.079 ************************************ 00:28:45.079 START TEST nvmf_target_disconnect 00:28:45.079 ************************************ 00:28:45.079 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:45.446 * Looking for test storage... 00:28:45.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:45.446 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:45.446 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:28:45.446 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:45.446 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:45.446 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:45.446 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:45.446 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:45.446 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:45.446 14:48:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:45.446 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:45.446 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:45.446 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:45.446 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:45.446 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:45.446 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:45.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.447 --rc genhtml_branch_coverage=1 00:28:45.447 --rc genhtml_function_coverage=1 00:28:45.447 --rc genhtml_legend=1 00:28:45.447 --rc geninfo_all_blocks=1 00:28:45.447 --rc geninfo_unexecuted_blocks=1 
00:28:45.447 00:28:45.447 ' 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:45.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.447 --rc genhtml_branch_coverage=1 00:28:45.447 --rc genhtml_function_coverage=1 00:28:45.447 --rc genhtml_legend=1 00:28:45.447 --rc geninfo_all_blocks=1 00:28:45.447 --rc geninfo_unexecuted_blocks=1 00:28:45.447 00:28:45.447 ' 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:45.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.447 --rc genhtml_branch_coverage=1 00:28:45.447 --rc genhtml_function_coverage=1 00:28:45.447 --rc genhtml_legend=1 00:28:45.447 --rc geninfo_all_blocks=1 00:28:45.447 --rc geninfo_unexecuted_blocks=1 00:28:45.447 00:28:45.447 ' 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:45.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.447 --rc genhtml_branch_coverage=1 00:28:45.447 --rc genhtml_function_coverage=1 00:28:45.447 --rc genhtml_legend=1 00:28:45.447 --rc geninfo_all_blocks=1 00:28:45.447 --rc geninfo_unexecuted_blocks=1 00:28:45.447 00:28:45.447 ' 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:45.447 14:48:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:45.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:45.447 14:48:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:50.733 
14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:50.733 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:50.733 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:50.733 Found net devices under 0000:31:00.0: cvl_0_0 00:28:50.733 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:50.734 Found net devices under 0000:31:00.1: cvl_0_1 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.734 14:48:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:50.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:50.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:28:50.734 00:28:50.734 --- 10.0.0.2 ping statistics --- 00:28:50.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.734 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:50.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:28:50.734 00:28:50.734 --- 10.0.0.1 ping statistics --- 00:28:50.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.734 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:50.734 14:48:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:50.734 ************************************ 00:28:50.734 START TEST nvmf_target_disconnect_tc1 00:28:50.734 ************************************ 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:50.734 [2024-11-20 14:48:57.751980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.734 [2024-11-20 14:48:57.752026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72cf0 with 
addr=10.0.0.2, port=4420 00:28:50.734 [2024-11-20 14:48:57.752044] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:50.734 [2024-11-20 14:48:57.752058] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:50.734 [2024-11-20 14:48:57.752064] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:50.734 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:50.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:50.734 Initializing NVMe Controllers 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:50.734 00:28:50.734 real 0m0.104s 00:28:50.734 user 0m0.045s 00:28:50.734 sys 0m0.058s 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:50.734 ************************************ 00:28:50.734 END TEST nvmf_target_disconnect_tc1 00:28:50.734 ************************************ 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:50.734 14:48:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:50.734 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:50.994 ************************************ 00:28:50.994 START TEST nvmf_target_disconnect_tc2 00:28:50.994 ************************************ 00:28:50.994 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:28:50.994 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:50.994 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:50.994 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:50.994 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:50.994 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:50.994 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4099736 00:28:50.994 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4099736 00:28:50.994 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4099736 ']' 00:28:50.994 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:50.994 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.994 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.994 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.995 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.995 14:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:50.995 [2024-11-20 14:48:57.849684] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:28:50.995 [2024-11-20 14:48:57.849730] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.995 [2024-11-20 14:48:57.934366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:50.995 [2024-11-20 14:48:57.971112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.995 [2024-11-20 14:48:57.971143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.995 [2024-11-20 14:48:57.971151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.995 [2024-11-20 14:48:57.971158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.995 [2024-11-20 14:48:57.971164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
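The nvmfappstart/waitforlisten step traced above launches the SPDK target inside the test namespace and blocks until its RPC socket appears. A rough sketch of that bring-up, assuming a local SPDK build at ./build/bin/nvmf_tgt (path hypothetical; the log uses an absolute Jenkins workspace path) and the namespace and flags recorded in this log:

```shell
#!/usr/bin/env bash

# Start nvmf_tgt inside the target's network namespace:
#   -i 0      shared-memory instance id
#   -e 0xFFFF enable all tracepoint groups (see the spdk_trace notice above)
#   -m 0xF0   core mask: cores 4-7, matching the reactor startup lines in the log
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!

# Wait for the app to create and listen on its UNIX-domain RPC socket
# before issuing any rpc_cmd calls against it.
until [ -S /var/tmp/spdk.sock ]; do
    sleep 0.1
done
echo "nvmf_tgt up, pid=$nvmfpid"
```

The real waitforlisten helper also retries with a bounded max_retries counter and verifies the process is alive; the loop above is only the minimal shape of that wait.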
00:28:50.995 [2024-11-20 14:48:57.972774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:50.995 [2024-11-20 14:48:57.972923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:50.995 [2024-11-20 14:48:57.973065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:50.995 [2024-11-20 14:48:57.973066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.933 Malloc0 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.933 14:48:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.933 [2024-11-20 14:48:58.693277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.933 14:48:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.933 [2024-11-20 14:48:58.721596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=4100086 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:51.933 14:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:53.851 14:49:00 
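The rpc_cmd calls traced above configure the target end to end (malloc bdev, TCP transport, subsystem, namespace, listeners) before the reconnect example is launched against it. A condensed sketch of the same sequence using SPDK's rpc.py, with the socket path and script location assumed from the defaults visible in this log:

```shell
#!/usr/bin/env bash
set -e
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC bdev_malloc_create 64 512 -b Malloc0    # 64 MiB ramdisk, 512 B blocks
$RPC nvmf_create_transport -t tcp -o         # TCP transport init
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Drive queued random I/O through the reconnect example; the test then
# kills the target (kill -9, as in the log) to exercise recovery paths.
./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
```

All flags are copied from the traced commands above; only the relative paths are assumptions. The flood of "qpair failed and we were unable to recover it" entries that follows in the log is the expected symptom of the deliberate target kill, not a test-infrastructure fault.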
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 4099736 00:28:53.851 14:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:53.851 Read completed with error (sct=0, sc=8) 00:28:53.851 starting I/O failed 00:28:53.851 Read completed with error (sct=0, sc=8) 00:28:53.851 starting I/O failed 00:28:53.851 Read completed with error (sct=0, sc=8) 00:28:53.851 starting I/O failed 00:28:53.851 Read completed with error (sct=0, sc=8) 00:28:53.851 starting I/O failed 00:28:53.851 Read completed with error (sct=0, sc=8) 00:28:53.851 starting I/O failed 00:28:53.851 Read completed with error (sct=0, sc=8) 00:28:53.851 starting I/O failed 00:28:53.851 Read completed with error (sct=0, sc=8) 00:28:53.851 starting I/O failed 00:28:53.851 Read completed with error (sct=0, sc=8) 00:28:53.851 starting I/O failed 00:28:53.851 Read completed with error (sct=0, sc=8) 00:28:53.851 starting I/O failed 00:28:53.851 Read completed with error (sct=0, sc=8) 00:28:53.851 starting I/O failed 00:28:53.851 Read completed with error (sct=0, sc=8) 00:28:53.851 starting I/O failed 00:28:53.851 Read completed with error (sct=0, sc=8) 00:28:53.851 starting I/O failed 00:28:53.851 Write completed with error (sct=0, sc=8) 00:28:53.851 starting I/O failed 00:28:53.851 Read completed with error (sct=0, sc=8) 00:28:53.851 starting I/O failed 00:28:53.851 Read completed with error (sct=0, sc=8) 00:28:53.851 starting I/O failed 00:28:53.852 Write completed with error (sct=0, sc=8) 00:28:53.852 starting I/O failed 00:28:53.852 Write completed with error (sct=0, sc=8) 00:28:53.852 starting I/O failed 00:28:53.852 Write completed with error (sct=0, sc=8) 00:28:53.852 starting I/O failed 00:28:53.852 Read completed with error (sct=0, sc=8) 00:28:53.852 starting I/O failed 00:28:53.852 Read completed with error (sct=0, sc=8) 00:28:53.852 starting I/O failed 00:28:53.852 
Write completed with error (sct=0, sc=8) 00:28:53.852 starting I/O failed 00:28:53.852 Read completed with error (sct=0, sc=8) 00:28:53.852 starting I/O failed 00:28:53.852 Write completed with error (sct=0, sc=8) 00:28:53.852 starting I/O failed 00:28:53.852 Write completed with error (sct=0, sc=8) 00:28:53.852 starting I/O failed 00:28:53.852 Write completed with error (sct=0, sc=8) 00:28:53.852 starting I/O failed 00:28:53.852 Read completed with error (sct=0, sc=8) 00:28:53.852 starting I/O failed 00:28:53.852 Write completed with error (sct=0, sc=8) 00:28:53.852 starting I/O failed 00:28:53.852 Read completed with error (sct=0, sc=8) 00:28:53.852 starting I/O failed 00:28:53.852 Read completed with error (sct=0, sc=8) 00:28:53.852 starting I/O failed 00:28:53.852 Read completed with error (sct=0, sc=8) 00:28:53.852 starting I/O failed 00:28:53.852 Read completed with error (sct=0, sc=8) 00:28:53.852 starting I/O failed 00:28:53.852 Write completed with error (sct=0, sc=8) 00:28:53.852 starting I/O failed 00:28:53.852 [2024-11-20 14:49:00.749079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:53.852 [2024-11-20 14:49:00.749568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.749605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.749840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.749849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 
00:28:53.852 [2024-11-20 14:49:00.750019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.750029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.750487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.750515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.750699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.750717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.751061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.751070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.751254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.751263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 
00:28:53.852 [2024-11-20 14:49:00.751503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.751511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.751844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.751852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.752033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.752042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.752283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.752292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.752626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.752635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 
00:28:53.852 [2024-11-20 14:49:00.752935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.752944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.753270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.753279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.753616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.753625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.753958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.753967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.754289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.754298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 
00:28:53.852 [2024-11-20 14:49:00.754608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.754616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.754957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.754965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.755349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.755358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.755542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.755551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.755708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.755716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 
00:28:53.852 [2024-11-20 14:49:00.755995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.756003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.756290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.756298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.756660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.756668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.756799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.756808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.757086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.757094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 
00:28:53.852 [2024-11-20 14:49:00.757401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.757410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.852 [2024-11-20 14:49:00.757755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.852 [2024-11-20 14:49:00.757763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.852 qpair failed and we were unable to recover it. 00:28:53.853 [2024-11-20 14:49:00.758020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.853 [2024-11-20 14:49:00.758028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.853 qpair failed and we were unable to recover it. 00:28:53.853 [2024-11-20 14:49:00.758361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.853 [2024-11-20 14:49:00.758370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.853 qpair failed and we were unable to recover it. 00:28:53.853 [2024-11-20 14:49:00.758674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.853 [2024-11-20 14:49:00.758682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.853 qpair failed and we were unable to recover it. 
00:28:53.853 [2024-11-20 14:49:00.758872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.853 [2024-11-20 14:49:00.758880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.853 qpair failed and we were unable to recover it. 00:28:53.853 [2024-11-20 14:49:00.759173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.853 [2024-11-20 14:49:00.759182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.853 qpair failed and we were unable to recover it. 00:28:53.853 [2024-11-20 14:49:00.759368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.853 [2024-11-20 14:49:00.759377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.853 qpair failed and we were unable to recover it. 00:28:53.853 [2024-11-20 14:49:00.759685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.853 [2024-11-20 14:49:00.759693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.853 qpair failed and we were unable to recover it. 00:28:53.853 [2024-11-20 14:49:00.759982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.853 [2024-11-20 14:49:00.759990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.853 qpair failed and we were unable to recover it. 
00:28:53.853 [2024-11-20 14:49:00.760290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.853 [2024-11-20 14:49:00.760300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.853 qpair failed and we were unable to recover it. 00:28:53.853 [2024-11-20 14:49:00.760572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.853 [2024-11-20 14:49:00.760580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.853 qpair failed and we were unable to recover it. 00:28:53.853 [2024-11-20 14:49:00.760885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.853 [2024-11-20 14:49:00.760894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.853 qpair failed and we were unable to recover it. 00:28:53.853 [2024-11-20 14:49:00.761204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.853 [2024-11-20 14:49:00.761213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.853 qpair failed and we were unable to recover it. 00:28:53.853 [2024-11-20 14:49:00.761621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.853 [2024-11-20 14:49:00.761630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.853 qpair failed and we were unable to recover it. 
00:28:53.857 [2024-11-20 14:49:00.793158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.857 [2024-11-20 14:49:00.793165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.857 qpair failed and we were unable to recover it. 00:28:53.857 [2024-11-20 14:49:00.793464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.857 [2024-11-20 14:49:00.793473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.857 qpair failed and we were unable to recover it. 00:28:53.857 [2024-11-20 14:49:00.793707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.857 [2024-11-20 14:49:00.793715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.857 qpair failed and we were unable to recover it. 00:28:53.857 [2024-11-20 14:49:00.794016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.857 [2024-11-20 14:49:00.794024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.857 qpair failed and we were unable to recover it. 00:28:53.857 [2024-11-20 14:49:00.794297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.857 [2024-11-20 14:49:00.794306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.857 qpair failed and we were unable to recover it. 
00:28:53.857 [2024-11-20 14:49:00.794609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.857 [2024-11-20 14:49:00.794617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.857 qpair failed and we were unable to recover it. 00:28:53.857 [2024-11-20 14:49:00.794903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.857 [2024-11-20 14:49:00.794912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.857 qpair failed and we were unable to recover it. 00:28:53.857 [2024-11-20 14:49:00.795063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.857 [2024-11-20 14:49:00.795071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.857 qpair failed and we were unable to recover it. 00:28:53.857 [2024-11-20 14:49:00.795371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.857 [2024-11-20 14:49:00.795379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.857 qpair failed and we were unable to recover it. 00:28:53.857 [2024-11-20 14:49:00.795679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.857 [2024-11-20 14:49:00.795687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.857 qpair failed and we were unable to recover it. 
00:28:53.857 [2024-11-20 14:49:00.796007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.857 [2024-11-20 14:49:00.796015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.857 qpair failed and we were unable to recover it. 00:28:53.857 [2024-11-20 14:49:00.796306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.857 [2024-11-20 14:49:00.796314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.857 qpair failed and we were unable to recover it. 00:28:53.857 [2024-11-20 14:49:00.796496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.857 [2024-11-20 14:49:00.796507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.857 qpair failed and we were unable to recover it. 00:28:53.857 [2024-11-20 14:49:00.796672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.857 [2024-11-20 14:49:00.796681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.857 qpair failed and we were unable to recover it. 00:28:53.857 [2024-11-20 14:49:00.796974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.857 [2024-11-20 14:49:00.796982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.857 qpair failed and we were unable to recover it. 
00:28:53.857 [2024-11-20 14:49:00.797259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.797268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.797457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.797465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.797726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.797734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.798026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.798034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.798384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.798392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 
00:28:53.858 [2024-11-20 14:49:00.798646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.798654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.798949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.798957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.799139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.799147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.799400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.799409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.799627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.799636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 
00:28:53.858 [2024-11-20 14:49:00.799913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.799922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.800206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.800214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.800541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.800549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.800906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.800914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.801195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.801203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 
00:28:53.858 [2024-11-20 14:49:00.801398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.801406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.801683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.801691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.801901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.801910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.802071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.802080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.802374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.802382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 
00:28:53.858 [2024-11-20 14:49:00.802685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.802693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.802971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.802979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.803274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.803283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.803664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.803672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.803839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.803848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 
00:28:53.858 [2024-11-20 14:49:00.804126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.804134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.804459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.804467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.858 [2024-11-20 14:49:00.804808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.858 [2024-11-20 14:49:00.804817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.858 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.805099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.805108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.805402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.805410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 
00:28:53.859 [2024-11-20 14:49:00.805708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.805715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.806050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.806059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.806383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.806392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.806691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.806700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.806866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.806875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 
00:28:53.859 [2024-11-20 14:49:00.807014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.807022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.807355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.807362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.807668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.807680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.807838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.807847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.808186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.808194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 
00:28:53.859 [2024-11-20 14:49:00.808474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.808482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.808766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.808774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.809054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.809062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.809350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.809358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.809701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.809708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 
00:28:53.859 [2024-11-20 14:49:00.810004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.810012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.810304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.810312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.810644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.810653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.810813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.810822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.811142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.811151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 
00:28:53.859 [2024-11-20 14:49:00.811441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.811449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.811708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.811716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.812004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.812013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.812317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.812325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.812643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.812651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 
00:28:53.859 [2024-11-20 14:49:00.812932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.812940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.859 [2024-11-20 14:49:00.813237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.859 [2024-11-20 14:49:00.813248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.859 qpair failed and we were unable to recover it. 00:28:53.860 [2024-11-20 14:49:00.813558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.860 [2024-11-20 14:49:00.813566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.860 qpair failed and we were unable to recover it. 00:28:53.860 [2024-11-20 14:49:00.813872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.860 [2024-11-20 14:49:00.813880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.860 qpair failed and we were unable to recover it. 00:28:53.860 [2024-11-20 14:49:00.814171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.860 [2024-11-20 14:49:00.814179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.860 qpair failed and we were unable to recover it. 
00:28:53.860 [2024-11-20 14:49:00.814466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.860 [2024-11-20 14:49:00.814475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.860 qpair failed and we were unable to recover it. 00:28:53.860 [2024-11-20 14:49:00.814751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.860 [2024-11-20 14:49:00.814759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.860 qpair failed and we were unable to recover it. 00:28:53.860 [2024-11-20 14:49:00.814963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.860 [2024-11-20 14:49:00.814971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.860 qpair failed and we were unable to recover it. 00:28:53.860 [2024-11-20 14:49:00.815277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.860 [2024-11-20 14:49:00.815285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.860 qpair failed and we were unable to recover it. 00:28:53.860 [2024-11-20 14:49:00.815508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.860 [2024-11-20 14:49:00.815516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.860 qpair failed and we were unable to recover it. 
00:28:53.864 [2024-11-20 14:49:00.846741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.864 [2024-11-20 14:49:00.846749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.864 qpair failed and we were unable to recover it. 00:28:53.864 [2024-11-20 14:49:00.847034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.864 [2024-11-20 14:49:00.847042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.864 qpair failed and we were unable to recover it. 00:28:53.864 [2024-11-20 14:49:00.847350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.864 [2024-11-20 14:49:00.847358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.864 qpair failed and we were unable to recover it. 00:28:53.864 [2024-11-20 14:49:00.847664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.864 [2024-11-20 14:49:00.847672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.864 qpair failed and we were unable to recover it. 00:28:53.864 [2024-11-20 14:49:00.847958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.864 [2024-11-20 14:49:00.847966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.864 qpair failed and we were unable to recover it. 
00:28:53.864 [2024-11-20 14:49:00.848264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.864 [2024-11-20 14:49:00.848272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.864 qpair failed and we were unable to recover it. 00:28:53.864 [2024-11-20 14:49:00.848569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.864 [2024-11-20 14:49:00.848577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.864 qpair failed and we were unable to recover it. 00:28:53.864 [2024-11-20 14:49:00.848791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.864 [2024-11-20 14:49:00.848799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.864 qpair failed and we were unable to recover it. 00:28:53.864 [2024-11-20 14:49:00.849106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.864 [2024-11-20 14:49:00.849114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.864 qpair failed and we were unable to recover it. 00:28:53.864 [2024-11-20 14:49:00.849394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.864 [2024-11-20 14:49:00.849402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.864 qpair failed and we were unable to recover it. 
00:28:53.864 [2024-11-20 14:49:00.849687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.864 [2024-11-20 14:49:00.849695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.864 qpair failed and we were unable to recover it. 00:28:53.864 [2024-11-20 14:49:00.849971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.864 [2024-11-20 14:49:00.849979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.864 qpair failed and we were unable to recover it. 00:28:53.864 [2024-11-20 14:49:00.850267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.864 [2024-11-20 14:49:00.850275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.864 qpair failed and we were unable to recover it. 00:28:53.864 [2024-11-20 14:49:00.850575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.864 [2024-11-20 14:49:00.850583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.864 qpair failed and we were unable to recover it. 00:28:53.864 [2024-11-20 14:49:00.850862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.864 [2024-11-20 14:49:00.850869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.864 qpair failed and we were unable to recover it. 
00:28:53.864 [2024-11-20 14:49:00.851167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.864 [2024-11-20 14:49:00.851175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.864 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.851473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.851482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.851876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.851884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.852179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.852188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.852483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.852491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 
00:28:53.865 [2024-11-20 14:49:00.852884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.852892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.853188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.853196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.853491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.853499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.853797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.853805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.854085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.854093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 
00:28:53.865 [2024-11-20 14:49:00.854381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.854389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.854704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.854713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.854988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.854996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.855307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.855315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.855629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.855637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 
00:28:53.865 [2024-11-20 14:49:00.855924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.855932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.856226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.856234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.856532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.856541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.856839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.856847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.857150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.857158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 
00:28:53.865 [2024-11-20 14:49:00.857535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.857544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.857744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.857752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.857939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.857948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.858204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.858213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.858530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.858538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 
00:28:53.865 [2024-11-20 14:49:00.858826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.858834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.859127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.859135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.859435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.859443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.859622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.859631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 00:28:53.865 [2024-11-20 14:49:00.859965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.865 [2024-11-20 14:49:00.859973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.865 qpair failed and we were unable to recover it. 
00:28:53.866 [2024-11-20 14:49:00.860271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.860280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.860597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.860605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.860765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.860773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.861218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.861225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.861518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.861528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 
00:28:53.866 [2024-11-20 14:49:00.861849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.861857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.862093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.862101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.862256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.862264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.862533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.862541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.862824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.862832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 
00:28:53.866 [2024-11-20 14:49:00.863039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.863047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.863351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.863359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.863669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.863678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.864002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.864010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.864307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.864315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 
00:28:53.866 [2024-11-20 14:49:00.864621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.864629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.864930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.864939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.865117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.865125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.865405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.865414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.865727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.865735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 
00:28:53.866 [2024-11-20 14:49:00.866031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.866040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.866364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.866373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.866686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.866694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.867006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.867014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.867308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.867316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 
00:28:53.866 [2024-11-20 14:49:00.867619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.867627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.867923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.867930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.868225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.868233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.868533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.868541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 00:28:53.866 [2024-11-20 14:49:00.868844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.866 [2024-11-20 14:49:00.868853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.866 qpair failed and we were unable to recover it. 
00:28:53.866 [2024-11-20 14:49:00.869140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.867 [2024-11-20 14:49:00.869148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.867 qpair failed and we were unable to recover it. 00:28:53.867 [2024-11-20 14:49:00.869446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.867 [2024-11-20 14:49:00.869455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.867 qpair failed and we were unable to recover it. 00:28:53.867 [2024-11-20 14:49:00.869751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.867 [2024-11-20 14:49:00.869760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.867 qpair failed and we were unable to recover it. 00:28:53.867 [2024-11-20 14:49:00.870048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.867 [2024-11-20 14:49:00.870056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.867 qpair failed and we were unable to recover it. 00:28:53.867 [2024-11-20 14:49:00.870353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.867 [2024-11-20 14:49:00.870361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:53.867 qpair failed and we were unable to recover it. 
00:28:54.146 [2024-11-20 14:49:00.903260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.903268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 00:28:54.146 [2024-11-20 14:49:00.903548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.903557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 00:28:54.146 [2024-11-20 14:49:00.903850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.903858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 00:28:54.146 [2024-11-20 14:49:00.904146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.904154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 00:28:54.146 [2024-11-20 14:49:00.904357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.904365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 
00:28:54.146 [2024-11-20 14:49:00.904669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.904678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 00:28:54.146 [2024-11-20 14:49:00.904972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.904980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 00:28:54.146 [2024-11-20 14:49:00.905124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.905133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 00:28:54.146 [2024-11-20 14:49:00.905390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.905398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 00:28:54.146 [2024-11-20 14:49:00.905667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.905675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 
00:28:54.146 [2024-11-20 14:49:00.905961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.905969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 00:28:54.146 [2024-11-20 14:49:00.906259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.906268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 00:28:54.146 [2024-11-20 14:49:00.906534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.906542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 00:28:54.146 [2024-11-20 14:49:00.906831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.906839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 00:28:54.146 [2024-11-20 14:49:00.907118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.907126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 
00:28:54.146 [2024-11-20 14:49:00.907409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.907417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 00:28:54.146 [2024-11-20 14:49:00.907709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.907717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 00:28:54.146 [2024-11-20 14:49:00.908006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.908014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 00:28:54.146 [2024-11-20 14:49:00.908308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.146 [2024-11-20 14:49:00.908316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.146 qpair failed and we were unable to recover it. 00:28:54.146 [2024-11-20 14:49:00.908530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.908540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 
00:28:54.147 [2024-11-20 14:49:00.908856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.908864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 00:28:54.147 [2024-11-20 14:49:00.909160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.909168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 00:28:54.147 [2024-11-20 14:49:00.909339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.909347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 00:28:54.147 [2024-11-20 14:49:00.909663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.909671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 00:28:54.147 [2024-11-20 14:49:00.909965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.909973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 
00:28:54.147 [2024-11-20 14:49:00.910124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.910132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 00:28:54.147 [2024-11-20 14:49:00.910455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.910463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 00:28:54.147 [2024-11-20 14:49:00.910754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.910762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 00:28:54.147 [2024-11-20 14:49:00.911053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.911061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 00:28:54.147 [2024-11-20 14:49:00.911348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.911356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 
00:28:54.147 [2024-11-20 14:49:00.911664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.911672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 00:28:54.147 [2024-11-20 14:49:00.911963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.911971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 00:28:54.147 [2024-11-20 14:49:00.912271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.912279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 00:28:54.147 [2024-11-20 14:49:00.912561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.912569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 00:28:54.147 [2024-11-20 14:49:00.912865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.912873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 
00:28:54.147 [2024-11-20 14:49:00.913165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.913173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 00:28:54.147 [2024-11-20 14:49:00.913478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.913486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 00:28:54.147 [2024-11-20 14:49:00.913768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.913776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 00:28:54.147 [2024-11-20 14:49:00.913963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.913970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 00:28:54.147 [2024-11-20 14:49:00.914271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.914280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 
00:28:54.147 [2024-11-20 14:49:00.914610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.147 [2024-11-20 14:49:00.914618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.147 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.914911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.914918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.915209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.915217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.915556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.915564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.915884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.915893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 
00:28:54.148 [2024-11-20 14:49:00.916178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.916186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.916489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.916498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.916777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.916785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.917136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.917144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.917463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.917471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 
00:28:54.148 [2024-11-20 14:49:00.917749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.917757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.918047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.918055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.918342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.918350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.918540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.918548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.918853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.918861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 
00:28:54.148 [2024-11-20 14:49:00.919167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.919176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.919490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.919499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.919788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.919796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.919949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.919959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.920149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.920159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 
00:28:54.148 [2024-11-20 14:49:00.920464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.920472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.920775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.920783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.921068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.921076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.921380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.921388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.921681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.921688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 
00:28:54.148 [2024-11-20 14:49:00.921998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.922006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.922303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.922312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.922613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.922621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.148 qpair failed and we were unable to recover it. 00:28:54.148 [2024-11-20 14:49:00.922902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.148 [2024-11-20 14:49:00.922909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.149 qpair failed and we were unable to recover it. 00:28:54.149 [2024-11-20 14:49:00.923200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.149 [2024-11-20 14:49:00.923209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.149 qpair failed and we were unable to recover it. 
00:28:54.149 [2024-11-20 14:49:00.923383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.149 [2024-11-20 14:49:00.923391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.149 qpair failed and we were unable to recover it. 00:28:54.149 [2024-11-20 14:49:00.923710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.149 [2024-11-20 14:49:00.923718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.149 qpair failed and we were unable to recover it. 00:28:54.149 [2024-11-20 14:49:00.924002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.149 [2024-11-20 14:49:00.924010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.149 qpair failed and we were unable to recover it. 00:28:54.149 [2024-11-20 14:49:00.924305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.149 [2024-11-20 14:49:00.924314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.149 qpair failed and we were unable to recover it. 00:28:54.149 [2024-11-20 14:49:00.924612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.149 [2024-11-20 14:49:00.924620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.149 qpair failed and we were unable to recover it. 
00:28:54.149 [2024-11-20 14:49:00.924912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.149 [2024-11-20 14:49:00.924920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.149 qpair failed and we were unable to recover it. 
00:28:54.149 [... the three messages above (connect() errno = 111, i.e. ECONNREFUSED; qpair connect error for the same tqpair=0x7f3c94000b90, addr=10.0.0.2, port=4420; unrecoverable qpair failure) repeat identically through 14:49:00.958767, differing only in timestamps ...] 
00:28:54.153 [2024-11-20 14:49:00.958759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.153 [2024-11-20 14:49:00.958767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.153 qpair failed and we were unable to recover it. 
00:28:54.153 [2024-11-20 14:49:00.959052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.153 [2024-11-20 14:49:00.959060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.153 qpair failed and we were unable to recover it. 00:28:54.153 [2024-11-20 14:49:00.959361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.153 [2024-11-20 14:49:00.959369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.153 qpair failed and we were unable to recover it. 00:28:54.153 [2024-11-20 14:49:00.959666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.153 [2024-11-20 14:49:00.959675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.153 qpair failed and we were unable to recover it. 00:28:54.153 [2024-11-20 14:49:00.960011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.153 [2024-11-20 14:49:00.960019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.153 qpair failed and we were unable to recover it. 00:28:54.153 [2024-11-20 14:49:00.960310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.153 [2024-11-20 14:49:00.960318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.153 qpair failed and we were unable to recover it. 
00:28:54.153 [2024-11-20 14:49:00.960622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.153 [2024-11-20 14:49:00.960631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.153 qpair failed and we were unable to recover it. 00:28:54.153 [2024-11-20 14:49:00.960786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.153 [2024-11-20 14:49:00.960794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.153 qpair failed and we were unable to recover it. 00:28:54.153 [2024-11-20 14:49:00.961117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.153 [2024-11-20 14:49:00.961127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.153 qpair failed and we were unable to recover it. 00:28:54.153 [2024-11-20 14:49:00.961406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.153 [2024-11-20 14:49:00.961414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.153 qpair failed and we were unable to recover it. 00:28:54.153 [2024-11-20 14:49:00.961695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.153 [2024-11-20 14:49:00.961702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.153 qpair failed and we were unable to recover it. 
00:28:54.153 [2024-11-20 14:49:00.962005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.153 [2024-11-20 14:49:00.962013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.153 qpair failed and we were unable to recover it. 00:28:54.153 [2024-11-20 14:49:00.962299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.153 [2024-11-20 14:49:00.962308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.153 qpair failed and we were unable to recover it. 00:28:54.153 [2024-11-20 14:49:00.962607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.153 [2024-11-20 14:49:00.962615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.153 qpair failed and we were unable to recover it. 00:28:54.153 [2024-11-20 14:49:00.962935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.153 [2024-11-20 14:49:00.962943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.153 qpair failed and we were unable to recover it. 00:28:54.153 [2024-11-20 14:49:00.963258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.153 [2024-11-20 14:49:00.963265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.153 qpair failed and we were unable to recover it. 
00:28:54.153 [2024-11-20 14:49:00.963545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.153 [2024-11-20 14:49:00.963553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.153 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.963832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.963840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.964133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.964141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.964434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.964443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.964749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.964757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 
00:28:54.154 [2024-11-20 14:49:00.965095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.965104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.965254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.965263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.965543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.965551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.965831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.965839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.966135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.966142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 
00:28:54.154 [2024-11-20 14:49:00.966429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.966437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.966731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.966740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.967020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.967028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.967347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.967355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.967652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.967662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 
00:28:54.154 [2024-11-20 14:49:00.967957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.967966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.968259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.968267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.968529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.968537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.968838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.968846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.969145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.969153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 
00:28:54.154 [2024-11-20 14:49:00.969451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.969460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.969758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.969766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.969923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.969932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.970259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.970268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.970565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.970573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 
00:28:54.154 [2024-11-20 14:49:00.970862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.970870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.971035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.971043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.971300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.971308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.971613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.971621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 00:28:54.154 [2024-11-20 14:49:00.971909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.154 [2024-11-20 14:49:00.971917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.154 qpair failed and we were unable to recover it. 
00:28:54.154 [2024-11-20 14:49:00.972209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.972217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.972575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.972584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.972881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.972889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.973179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.973187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.973484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.973492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 
00:28:54.155 [2024-11-20 14:49:00.973793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.973801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.974091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.974099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.974406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.974415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.974693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.974701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.975006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.975014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 
00:28:54.155 [2024-11-20 14:49:00.975206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.975214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.975304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.975312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.975596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.975605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.975800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.975808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.976075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.976083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 
00:28:54.155 [2024-11-20 14:49:00.976398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.976407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.976719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.976726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.976950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.976958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.977231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.977239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.977570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.977578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 
00:28:54.155 [2024-11-20 14:49:00.977865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.977873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.978195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.978204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.978500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.978509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.978810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.978818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.978969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.978979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 
00:28:54.155 [2024-11-20 14:49:00.979283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.979292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.979574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.979583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.979890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.979899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.155 qpair failed and we were unable to recover it. 00:28:54.155 [2024-11-20 14:49:00.980798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.155 [2024-11-20 14:49:00.980818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.156 qpair failed and we were unable to recover it. 00:28:54.156 [2024-11-20 14:49:00.981106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.156 [2024-11-20 14:49:00.981115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.156 qpair failed and we were unable to recover it. 
00:28:54.156 [2024-11-20 14:49:00.981412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.156 [2024-11-20 14:49:00.981421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.156 qpair failed and we were unable to recover it. 
[... the same connect()/qpair failure pair repeats ~115 times between 14:49:00.981 and 14:49:01.014 for tqpair=0x7f3c94000b90 (addr=10.0.0.2, port=4420); only the timestamps differ ...]
00:28:54.159 [2024-11-20 14:49:01.014812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.159 [2024-11-20 14:49:01.014821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.159 qpair failed and we were unable to recover it. 
00:28:54.159 [2024-11-20 14:49:01.015132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.159 [2024-11-20 14:49:01.015142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.159 qpair failed and we were unable to recover it. 00:28:54.159 [2024-11-20 14:49:01.015431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.159 [2024-11-20 14:49:01.015440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.159 qpair failed and we were unable to recover it. 00:28:54.159 [2024-11-20 14:49:01.015591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.159 [2024-11-20 14:49:01.015600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.159 qpair failed and we were unable to recover it. 00:28:54.159 [2024-11-20 14:49:01.015773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.159 [2024-11-20 14:49:01.015782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.159 qpair failed and we were unable to recover it. 00:28:54.159 [2024-11-20 14:49:01.016104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.159 [2024-11-20 14:49:01.016112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.159 qpair failed and we were unable to recover it. 
00:28:54.159 [2024-11-20 14:49:01.016417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.159 [2024-11-20 14:49:01.016425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.159 qpair failed and we were unable to recover it. 00:28:54.159 [2024-11-20 14:49:01.016725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.159 [2024-11-20 14:49:01.016734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.159 qpair failed and we were unable to recover it. 00:28:54.159 [2024-11-20 14:49:01.017064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.159 [2024-11-20 14:49:01.017072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.159 qpair failed and we were unable to recover it. 00:28:54.159 [2024-11-20 14:49:01.017391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.159 [2024-11-20 14:49:01.017400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.159 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.017717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.017725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 
00:28:54.160 [2024-11-20 14:49:01.018079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.018088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.018387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.018396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.018682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.018691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.019005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.019013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.019192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.019200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 
00:28:54.160 [2024-11-20 14:49:01.019533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.019543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.019866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.019875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.020180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.020188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.020465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.020474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.020745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.020754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 
00:28:54.160 [2024-11-20 14:49:01.020832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.020840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.021131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.021140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.021442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.021451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.021732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.021741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.022072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.022080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 
00:28:54.160 [2024-11-20 14:49:01.022397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.022406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.022712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.022722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.022914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.022922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.023207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.023215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.023278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.023285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 
00:28:54.160 [2024-11-20 14:49:01.023622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.023630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.023914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.023923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.024167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.024175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.024496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.024505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.024801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.024809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 
00:28:54.160 [2024-11-20 14:49:01.025096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.025104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.025321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.025331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.025608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.025618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.025898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.025907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 00:28:54.160 [2024-11-20 14:49:01.026087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.160 [2024-11-20 14:49:01.026095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.160 qpair failed and we were unable to recover it. 
00:28:54.160 [2024-11-20 14:49:01.026285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.026293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.026569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.026577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.026856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.026865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.027227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.027235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.027472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.027481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 
00:28:54.161 [2024-11-20 14:49:01.027784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.027792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.028101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.028110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.028434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.028442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.028612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.028620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.028816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.028825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 
00:28:54.161 [2024-11-20 14:49:01.029101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.029110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.029458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.029467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.029794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.029802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.030105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.030114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.030456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.030465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 
00:28:54.161 [2024-11-20 14:49:01.030650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.030658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.030944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.030953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.031102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.031110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.031435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.031444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.031752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.031761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 
00:28:54.161 [2024-11-20 14:49:01.031936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.031944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.032241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.032259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.032632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.032642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.032711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.032718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.032886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.032896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 
00:28:54.161 [2024-11-20 14:49:01.033208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.033216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.033502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.033512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.033803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.033811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.034088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.034097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.034296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.034304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 
00:28:54.161 [2024-11-20 14:49:01.034594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.034602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.034899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.161 [2024-11-20 14:49:01.034907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.161 qpair failed and we were unable to recover it. 00:28:54.161 [2024-11-20 14:49:01.035208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.162 [2024-11-20 14:49:01.035217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.162 qpair failed and we were unable to recover it. 00:28:54.162 [2024-11-20 14:49:01.035550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.162 [2024-11-20 14:49:01.035559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.162 qpair failed and we were unable to recover it. 00:28:54.162 [2024-11-20 14:49:01.035848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.162 [2024-11-20 14:49:01.035856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.162 qpair failed and we were unable to recover it. 
00:28:54.162 [2024-11-20 14:49:01.036133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.162 [2024-11-20 14:49:01.036142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.162 qpair failed and we were unable to recover it. 
[... the same three messages repeat for every reconnect attempt from 14:49:01.036 through 14:49:01.067 (~115 occurrences): each connect() to 10.0.0.2 port 4420 for tqpair=0x7f3c94000b90 returned errno = 111 (ECONNREFUSED) and the qpair could not be recovered ...]
00:28:54.165 [2024-11-20 14:49:01.067974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.165 [2024-11-20 14:49:01.067982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.165 qpair failed and we were unable to recover it. 00:28:54.165 [2024-11-20 14:49:01.068306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.165 [2024-11-20 14:49:01.068314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.165 qpair failed and we were unable to recover it. 00:28:54.165 [2024-11-20 14:49:01.068583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.165 [2024-11-20 14:49:01.068591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.165 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.068901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.068910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.069175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.069184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 
00:28:54.166 [2024-11-20 14:49:01.069369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.069377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.069691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.069700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.069983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.069991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.070256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.070264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.070564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.070572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 
00:28:54.166 [2024-11-20 14:49:01.070731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.070740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.071009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.071017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.071302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.071311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.071625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.071633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.071944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.071952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 
00:28:54.166 [2024-11-20 14:49:01.072226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.072234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.072426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.072434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.072707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.072716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.073007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.073015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.073312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.073320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 
00:28:54.166 [2024-11-20 14:49:01.073623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.073631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.073915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.073923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.074046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.074056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.074302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.074312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.074612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.074620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 
00:28:54.166 [2024-11-20 14:49:01.074900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.074908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.075230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.075238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.075395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.075403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.075705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.075713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.075870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.075879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 
00:28:54.166 [2024-11-20 14:49:01.076180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.076189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.076498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.076506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.076772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.076780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.166 [2024-11-20 14:49:01.077048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.166 [2024-11-20 14:49:01.077056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.166 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.077362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.077370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 
00:28:54.167 [2024-11-20 14:49:01.077675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.077683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.077967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.077978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.078254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.078262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.078568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.078576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.078866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.078874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 
00:28:54.167 [2024-11-20 14:49:01.079165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.079173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.079424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.079432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.079726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.079734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.080035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.080043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.080331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.080339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 
00:28:54.167 [2024-11-20 14:49:01.080683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.080691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.081016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.081024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.081310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.081319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.081619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.081628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.081921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.081929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 
00:28:54.167 [2024-11-20 14:49:01.082205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.082214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.082523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.082531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.082860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.082868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.082989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.082997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.083285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.083293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 
00:28:54.167 [2024-11-20 14:49:01.083562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.083570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.083874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.083883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.084188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.084196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.084517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.084526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.084844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.084853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 
00:28:54.167 [2024-11-20 14:49:01.085157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.085165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.085437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.085445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.085615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.085623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.086011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.086019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.167 [2024-11-20 14:49:01.086192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.086201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 
00:28:54.167 [2024-11-20 14:49:01.086498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.167 [2024-11-20 14:49:01.086507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.167 qpair failed and we were unable to recover it. 00:28:54.168 [2024-11-20 14:49:01.086853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.168 [2024-11-20 14:49:01.086862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.168 qpair failed and we were unable to recover it. 00:28:54.168 [2024-11-20 14:49:01.087141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.168 [2024-11-20 14:49:01.087149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.168 qpair failed and we were unable to recover it. 00:28:54.168 [2024-11-20 14:49:01.087419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.168 [2024-11-20 14:49:01.087428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.168 qpair failed and we were unable to recover it. 00:28:54.168 [2024-11-20 14:49:01.087723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.168 [2024-11-20 14:49:01.087731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.168 qpair failed and we were unable to recover it. 
00:28:54.168 [2024-11-20 14:49:01.088081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.168 [2024-11-20 14:49:01.088089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.168 qpair failed and we were unable to recover it. 00:28:54.168 [2024-11-20 14:49:01.088370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.168 [2024-11-20 14:49:01.088379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.168 qpair failed and we were unable to recover it. 00:28:54.168 [2024-11-20 14:49:01.088687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.168 [2024-11-20 14:49:01.088696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.168 qpair failed and we were unable to recover it. 00:28:54.168 [2024-11-20 14:49:01.088916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.168 [2024-11-20 14:49:01.088924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.168 qpair failed and we were unable to recover it. 00:28:54.168 [2024-11-20 14:49:01.089121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.168 [2024-11-20 14:49:01.089129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.168 qpair failed and we were unable to recover it. 
00:28:54.168 [2024-11-20 14:49:01.089452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.168 [2024-11-20 14:49:01.089461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.168 qpair failed and we were unable to recover it. 00:28:54.172 [the three log lines above (connect() failed, errno = 111 / sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeat 115 times for successive connection retries, timestamps 2024-11-20 14:49:01.089452 through 14:49:01.123404]
00:28:54.172 [2024-11-20 14:49:01.123706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.123714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.124003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.124011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.124293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.124302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.124612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.124620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.124908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.124916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 
00:28:54.172 [2024-11-20 14:49:01.125204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.125214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.125528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.125538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.125809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.125818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.126095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.126104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.126384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.126393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 
00:28:54.172 [2024-11-20 14:49:01.126694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.126703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.126988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.126996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.127294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.127303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.127467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.127475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.127768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.127776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 
00:28:54.172 [2024-11-20 14:49:01.127936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.127944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.128133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.128142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.128415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.128423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.128727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.128735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.129020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.129028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 
00:28:54.172 [2024-11-20 14:49:01.129314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.129323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.129646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.129655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.129946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.129953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.130254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.130262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.130567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.130576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 
00:28:54.172 [2024-11-20 14:49:01.130863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.130871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.131140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.131147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.131446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.131454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.172 [2024-11-20 14:49:01.131656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.172 [2024-11-20 14:49:01.131665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.172 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.131986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.131994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 
00:28:54.173 [2024-11-20 14:49:01.132281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.132290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.132567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.132575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.132882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.132890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.133174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.133183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.133423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.133432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 
00:28:54.173 [2024-11-20 14:49:01.133727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.133735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.134015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.134024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.134195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.134203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.134517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.134526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.134880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.134889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 
00:28:54.173 [2024-11-20 14:49:01.135183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.135191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.135501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.135509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.135714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.135722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.135997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.136005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.136196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.136204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 
00:28:54.173 [2024-11-20 14:49:01.136499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.136508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.136815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.136825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.137115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.137122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.137415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.137423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.137723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.137732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 
00:28:54.173 [2024-11-20 14:49:01.137908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.137916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.138217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.138225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.138523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.138533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.138853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.138860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.139148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.139156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 
00:28:54.173 [2024-11-20 14:49:01.139469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.139477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.139767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.173 [2024-11-20 14:49:01.139774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.173 qpair failed and we were unable to recover it. 00:28:54.173 [2024-11-20 14:49:01.140084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.140094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 00:28:54.174 [2024-11-20 14:49:01.140379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.140387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 00:28:54.174 [2024-11-20 14:49:01.140656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.140664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 
00:28:54.174 [2024-11-20 14:49:01.140967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.140975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 00:28:54.174 [2024-11-20 14:49:01.141263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.141271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 00:28:54.174 [2024-11-20 14:49:01.141620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.141628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 00:28:54.174 [2024-11-20 14:49:01.141902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.141910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 00:28:54.174 [2024-11-20 14:49:01.142195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.142204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 
00:28:54.174 [2024-11-20 14:49:01.142482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.142490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 00:28:54.174 [2024-11-20 14:49:01.142866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.142874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 00:28:54.174 [2024-11-20 14:49:01.143145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.143152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 00:28:54.174 [2024-11-20 14:49:01.143454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.143463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 00:28:54.174 [2024-11-20 14:49:01.143791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.143799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 
00:28:54.174 [2024-11-20 14:49:01.144094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.144102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 00:28:54.174 [2024-11-20 14:49:01.144399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.144407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 00:28:54.174 [2024-11-20 14:49:01.144711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.144719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 00:28:54.174 [2024-11-20 14:49:01.144995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.145003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 00:28:54.174 [2024-11-20 14:49:01.145289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.145299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it. 
00:28:54.174 [2024-11-20 14:49:01.145621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.174 [2024-11-20 14:49:01.145629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.174 qpair failed and we were unable to recover it.
[... identical connect() retries (errno = 111, connection refused) against tqpair=0x7f3c94000b90, addr=10.0.0.2, port=4420 repeated through 14:49:01.179818; each attempt ended with "qpair failed and we were unable to recover it." ...]
00:28:54.178 [2024-11-20 14:49:01.180129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.180137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 00:28:54.178 [2024-11-20 14:49:01.180292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.180300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 00:28:54.178 [2024-11-20 14:49:01.180606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.180615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 00:28:54.178 [2024-11-20 14:49:01.180905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.180914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 00:28:54.178 [2024-11-20 14:49:01.181219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.181227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 
00:28:54.178 [2024-11-20 14:49:01.181522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.181530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 00:28:54.178 [2024-11-20 14:49:01.181757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.181764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 00:28:54.178 [2024-11-20 14:49:01.182065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.182073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 00:28:54.178 [2024-11-20 14:49:01.182367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.182376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 00:28:54.178 [2024-11-20 14:49:01.182668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.182677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 
00:28:54.178 [2024-11-20 14:49:01.183023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.183031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 00:28:54.178 [2024-11-20 14:49:01.183330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.183338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 00:28:54.178 [2024-11-20 14:49:01.183686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.183694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 00:28:54.178 [2024-11-20 14:49:01.183979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.183987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 00:28:54.178 [2024-11-20 14:49:01.184272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.184281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 
00:28:54.178 [2024-11-20 14:49:01.184587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.184594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 00:28:54.178 [2024-11-20 14:49:01.184884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.184892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 00:28:54.178 [2024-11-20 14:49:01.185180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.185188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 00:28:54.178 [2024-11-20 14:49:01.185379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.185387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 00:28:54.178 [2024-11-20 14:49:01.185707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.185714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 
00:28:54.178 [2024-11-20 14:49:01.185997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.178 [2024-11-20 14:49:01.186006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.178 qpair failed and we were unable to recover it. 00:28:54.179 [2024-11-20 14:49:01.186187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.186195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 00:28:54.179 [2024-11-20 14:49:01.186491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.186500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 00:28:54.179 [2024-11-20 14:49:01.186788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.186796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 00:28:54.179 [2024-11-20 14:49:01.187097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.187105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 
00:28:54.179 [2024-11-20 14:49:01.187386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.187394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 00:28:54.179 [2024-11-20 14:49:01.187605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.187613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 00:28:54.179 [2024-11-20 14:49:01.187952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.187961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 00:28:54.179 [2024-11-20 14:49:01.188264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.188272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 00:28:54.179 [2024-11-20 14:49:01.188611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.188619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 
00:28:54.179 [2024-11-20 14:49:01.188904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.188912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 00:28:54.179 [2024-11-20 14:49:01.189251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.189259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 00:28:54.179 [2024-11-20 14:49:01.189532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.189540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 00:28:54.179 [2024-11-20 14:49:01.189866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.189874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 00:28:54.179 [2024-11-20 14:49:01.190177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.190185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 
00:28:54.179 [2024-11-20 14:49:01.190338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.190347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 00:28:54.179 [2024-11-20 14:49:01.190528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.190536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 00:28:54.179 [2024-11-20 14:49:01.190868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.190877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 00:28:54.179 [2024-11-20 14:49:01.191172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.191180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 00:28:54.179 [2024-11-20 14:49:01.191474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.179 [2024-11-20 14:49:01.191482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.179 qpair failed and we were unable to recover it. 
00:28:54.454 [2024-11-20 14:49:01.192109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.454 [2024-11-20 14:49:01.192126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.454 qpair failed and we were unable to recover it. 00:28:54.454 [2024-11-20 14:49:01.192414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.454 [2024-11-20 14:49:01.192424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.454 qpair failed and we were unable to recover it. 00:28:54.454 [2024-11-20 14:49:01.192738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.454 [2024-11-20 14:49:01.192747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.454 qpair failed and we were unable to recover it. 00:28:54.454 [2024-11-20 14:49:01.193067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.454 [2024-11-20 14:49:01.193077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.454 qpair failed and we were unable to recover it. 00:28:54.454 [2024-11-20 14:49:01.193386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.454 [2024-11-20 14:49:01.193395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.454 qpair failed and we were unable to recover it. 
00:28:54.454 [2024-11-20 14:49:01.193709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.454 [2024-11-20 14:49:01.193718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.454 qpair failed and we were unable to recover it. 00:28:54.454 [2024-11-20 14:49:01.194007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.454 [2024-11-20 14:49:01.194015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.454 qpair failed and we were unable to recover it. 00:28:54.454 [2024-11-20 14:49:01.194306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.454 [2024-11-20 14:49:01.194315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.454 qpair failed and we were unable to recover it. 00:28:54.454 [2024-11-20 14:49:01.194687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.454 [2024-11-20 14:49:01.194695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.454 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.194987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.194996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 
00:28:54.455 [2024-11-20 14:49:01.195277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.195286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.195612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.195620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.195900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.195908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.196193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.196201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.196495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.196505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 
00:28:54.455 [2024-11-20 14:49:01.196788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.196797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.197081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.197089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.197426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.197435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.197728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.197736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.198023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.198031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 
00:28:54.455 [2024-11-20 14:49:01.198204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.198214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.198509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.198517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.198809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.198818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.199156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.199164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.199475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.199483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 
00:28:54.455 [2024-11-20 14:49:01.199769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.199778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.199929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.199937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.200257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.200266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.200600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.200609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.200937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.200945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 
00:28:54.455 [2024-11-20 14:49:01.201228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.201236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.201537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.201546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.201704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.201713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.202008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.202017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.202313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.202322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 
00:28:54.455 [2024-11-20 14:49:01.202610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.202618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.202929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.202937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.203227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.203235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.203552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.203561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.203926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.203934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 
00:28:54.455 [2024-11-20 14:49:01.204231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.204239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.204542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.204550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.204832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.204840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.205124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.205133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.205414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.205424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 
00:28:54.455 [2024-11-20 14:49:01.205724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.205732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.455 qpair failed and we were unable to recover it. 00:28:54.455 [2024-11-20 14:49:01.206019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.455 [2024-11-20 14:49:01.206027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.206321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.206329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.206640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.206649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.206968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.206977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 
00:28:54.456 [2024-11-20 14:49:01.207260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.207268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.207569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.207577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.207870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.207878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.208169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.208177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.208484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.208493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 
00:28:54.456 [2024-11-20 14:49:01.208804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.208812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.209118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.209125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.209320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.209329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.209611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.209619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.209940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.209949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 
00:28:54.456 [2024-11-20 14:49:01.210239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.210251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.210538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.210546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.210854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.210862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.211053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.211061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.211368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.211377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 
00:28:54.456 [2024-11-20 14:49:01.211687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.211696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.211986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.211995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.212316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.212324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.212594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.212602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.212911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.212919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 
00:28:54.456 [2024-11-20 14:49:01.213204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.213213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.213517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.213525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.213834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.213842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.214129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.214137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.214412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.214420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 
00:28:54.456 [2024-11-20 14:49:01.214599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.214608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.214918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.214927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.215211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.215219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.215516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.215524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.215831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.215839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 
00:28:54.456 [2024-11-20 14:49:01.216138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.216146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.216434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.216443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.216749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.216757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.217045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.217054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.456 qpair failed and we were unable to recover it. 00:28:54.456 [2024-11-20 14:49:01.217330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.456 [2024-11-20 14:49:01.217339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 
00:28:54.457 [2024-11-20 14:49:01.217670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.217678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.217978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.217987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.218293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.218301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.218573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.218581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.218907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.218915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 
00:28:54.457 [2024-11-20 14:49:01.219212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.219220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.219537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.219546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.219821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.219829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.220112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.220121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.220398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.220406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 
00:28:54.457 [2024-11-20 14:49:01.220711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.220719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.221007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.221015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.221306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.221315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.221625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.221634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.221921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.221929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 
00:28:54.457 [2024-11-20 14:49:01.222222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.222230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.222403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.222411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.222715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.222723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.223006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.223014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.223304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.223312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 
00:28:54.457 [2024-11-20 14:49:01.223503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.223512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.223833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.223841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.224184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.224192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.224477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.224486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.224775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.224784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 
00:28:54.457 [2024-11-20 14:49:01.225066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.225074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.225394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.225402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.225690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.225698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.226008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.226017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.226306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.226314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 
00:28:54.457 [2024-11-20 14:49:01.226636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.226645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.226942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.226950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.227238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.227250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.227522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.227530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.227724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.227732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 
00:28:54.457 [2024-11-20 14:49:01.228042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.228051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.228318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.457 [2024-11-20 14:49:01.228326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.457 qpair failed and we were unable to recover it. 00:28:54.457 [2024-11-20 14:49:01.228617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.458 [2024-11-20 14:49:01.228625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.458 qpair failed and we were unable to recover it. 00:28:54.458 [2024-11-20 14:49:01.228944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.458 [2024-11-20 14:49:01.228953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.458 qpair failed and we were unable to recover it. 00:28:54.458 [2024-11-20 14:49:01.229233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.458 [2024-11-20 14:49:01.229240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.458 qpair failed and we were unable to recover it. 
00:28:54.458 [2024-11-20 14:49:01.229575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.458 [2024-11-20 14:49:01.229583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.458 qpair failed and we were unable to recover it. 00:28:54.458 [2024-11-20 14:49:01.229866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.458 [2024-11-20 14:49:01.229875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.458 qpair failed and we were unable to recover it. 00:28:54.458 [2024-11-20 14:49:01.230169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.458 [2024-11-20 14:49:01.230177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.458 qpair failed and we were unable to recover it. 00:28:54.458 [2024-11-20 14:49:01.230479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.458 [2024-11-20 14:49:01.230488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.458 qpair failed and we were unable to recover it. 00:28:54.458 [2024-11-20 14:49:01.230767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.458 [2024-11-20 14:49:01.230776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.458 qpair failed and we were unable to recover it. 
00:28:54.458 [2024-11-20 14:49:01.231069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.458 [2024-11-20 14:49:01.231077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.458 qpair failed and we were unable to recover it.
00:28:54.461 [same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7f3c94000b90, addr=10.0.0.2, port=4420 repeated through 2024-11-20 14:49:01.265695]
00:28:54.461 [2024-11-20 14:49:01.265991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.266000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.266180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.266189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.266407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.266416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.266734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.266742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.267034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.267043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 
00:28:54.461 [2024-11-20 14:49:01.267361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.267370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.267666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.267674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.267977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.267986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.268029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.268039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.268349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.268358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 
00:28:54.461 [2024-11-20 14:49:01.268638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.268647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.268805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.268812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.269126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.269134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.269456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.269464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.269764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.269772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 
00:28:54.461 [2024-11-20 14:49:01.270079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.270087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.270392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.270400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.270673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.270682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.270980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.270989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.271136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.271144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 
00:28:54.461 [2024-11-20 14:49:01.271524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.271533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.271877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.271885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.272171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.461 [2024-11-20 14:49:01.272179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.461 qpair failed and we were unable to recover it. 00:28:54.461 [2024-11-20 14:49:01.272415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.462 [2024-11-20 14:49:01.272423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.462 qpair failed and we were unable to recover it. 00:28:54.462 [2024-11-20 14:49:01.272598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.462 [2024-11-20 14:49:01.272606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.462 qpair failed and we were unable to recover it. 
00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 
Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 [2024-11-20 14:49:01.273352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:54.462 [2024-11-20 14:49:01.273453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506020 is same with the state(6) to be set 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 
00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 
[2024-11-20 14:49:01.274364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O 
failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Write completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 Read completed with error (sct=0, sc=8) 00:28:54.462 starting I/O failed 00:28:54.462 [2024-11-20 14:49:01.274629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.462 [2024-11-20 14:49:01.274913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.462 [2024-11-20 14:49:01.274923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.462 qpair failed and we were unable to recover it. 00:28:54.462 [2024-11-20 14:49:01.275092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.462 [2024-11-20 14:49:01.275100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.462 qpair failed and we were unable to recover it. 
00:28:54.462 [2024-11-20 14:49:01.275439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.462 [2024-11-20 14:49:01.275447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.462 qpair failed and we were unable to recover it. 00:28:54.462 [2024-11-20 14:49:01.275765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.275774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.276031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.276038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.276294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.276302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.276638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.276646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 
00:28:54.463 [2024-11-20 14:49:01.276942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.276950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.277184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.277194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.277539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.277550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.277814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.277825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.278134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.278145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 
00:28:54.463 [2024-11-20 14:49:01.278487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.278498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.278821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.278833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.279187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.279197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.279516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.279527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.279829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.279840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 
00:28:54.463 [2024-11-20 14:49:01.280006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.280016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.280355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.280366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.280677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.280688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.280999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.281009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.281265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.281276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 
00:28:54.463 [2024-11-20 14:49:01.281628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.281639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.281840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.281849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.282125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.282133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.282467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.282476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 00:28:54.463 [2024-11-20 14:49:01.282751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.463 [2024-11-20 14:49:01.282760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.463 qpair failed and we were unable to recover it. 
00:28:54.466 [2024-11-20 14:49:01.314295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.466 [2024-11-20 14:49:01.314304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.466 qpair failed and we were unable to recover it. 00:28:54.466 [2024-11-20 14:49:01.314582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.466 [2024-11-20 14:49:01.314590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.466 qpair failed and we were unable to recover it. 00:28:54.466 [2024-11-20 14:49:01.314733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.466 [2024-11-20 14:49:01.314741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.466 qpair failed and we were unable to recover it. 00:28:54.466 [2024-11-20 14:49:01.314933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.466 [2024-11-20 14:49:01.314942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.466 qpair failed and we were unable to recover it. 00:28:54.466 [2024-11-20 14:49:01.315261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.466 [2024-11-20 14:49:01.315269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.466 qpair failed and we were unable to recover it. 
00:28:54.466 [2024-11-20 14:49:01.315570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.466 [2024-11-20 14:49:01.315579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.466 qpair failed and we were unable to recover it. 00:28:54.466 [2024-11-20 14:49:01.315858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.466 [2024-11-20 14:49:01.315866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.466 qpair failed and we were unable to recover it. 00:28:54.466 [2024-11-20 14:49:01.316044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.466 [2024-11-20 14:49:01.316052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.466 qpair failed and we were unable to recover it. 00:28:54.466 [2024-11-20 14:49:01.316406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.466 [2024-11-20 14:49:01.316414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.466 qpair failed and we were unable to recover it. 00:28:54.466 [2024-11-20 14:49:01.316633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.466 [2024-11-20 14:49:01.316641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.466 qpair failed and we were unable to recover it. 
00:28:54.466 [2024-11-20 14:49:01.316932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.466 [2024-11-20 14:49:01.316940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.466 qpair failed and we were unable to recover it. 00:28:54.466 [2024-11-20 14:49:01.317238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.466 [2024-11-20 14:49:01.317253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.466 qpair failed and we were unable to recover it. 00:28:54.466 [2024-11-20 14:49:01.317546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.317554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.317859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.317868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.317995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.318004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 
00:28:54.467 [2024-11-20 14:49:01.318297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.318307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.318579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.318587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.318856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.318864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.319161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.319169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.319474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.319482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 
00:28:54.467 [2024-11-20 14:49:01.319769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.319777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.320034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.320042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.320231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.320240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.320566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.320576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.320785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.320794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 
00:28:54.467 [2024-11-20 14:49:01.321080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.321088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.321426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.321434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.321739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.321756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.322017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.322024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.322297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.322305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 
00:28:54.467 [2024-11-20 14:49:01.322484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.322493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.322673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.322681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.322950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.322958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.323242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.323254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.323538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.323546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 
00:28:54.467 [2024-11-20 14:49:01.323833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.323841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.324001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.324008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.324326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.324334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.324593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.324601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.324850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.324859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 
00:28:54.467 [2024-11-20 14:49:01.325142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.325150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.325312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.325322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.325523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.325532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.325849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.325858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.326156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.326163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 
00:28:54.467 [2024-11-20 14:49:01.326531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.326540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.326863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.326872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.327170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.327179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.327369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.327377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 00:28:54.467 [2024-11-20 14:49:01.327582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.467 [2024-11-20 14:49:01.327590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.467 qpair failed and we were unable to recover it. 
00:28:54.467 [2024-11-20 14:49:01.327675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.327683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.327930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.327939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.328115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.328123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.328477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.328485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.328698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.328706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 
00:28:54.468 [2024-11-20 14:49:01.329026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.329035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.329388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.329396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.329707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.329715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.329956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.329964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.330143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.330150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 
00:28:54.468 [2024-11-20 14:49:01.330490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.330498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.330769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.330777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.331070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.331078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.331337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.331346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.331495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.331502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 
00:28:54.468 [2024-11-20 14:49:01.331780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.331788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.332069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.332077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.332267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.332276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.332508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.332515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.332788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.332796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 
00:28:54.468 [2024-11-20 14:49:01.332943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.332952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.333144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.333153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.333219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.333225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.333545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.333553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 00:28:54.468 [2024-11-20 14:49:01.333865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.333873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it. 
00:28:54.468 [2024-11-20 14:49:01.334163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.468 [2024-11-20 14:49:01.334171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.468 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 (connection refused); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 14:49:01.334492 through 14:49:01.366282, elided here for brevity ...]
00:28:54.471 [2024-11-20 14:49:01.366467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.471 [2024-11-20 14:49:01.366476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.471 qpair failed and we were unable to recover it. 00:28:54.471 [2024-11-20 14:49:01.366758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.471 [2024-11-20 14:49:01.366765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.471 qpair failed and we were unable to recover it. 00:28:54.471 [2024-11-20 14:49:01.367057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.471 [2024-11-20 14:49:01.367064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.471 qpair failed and we were unable to recover it. 00:28:54.471 [2024-11-20 14:49:01.367219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.471 [2024-11-20 14:49:01.367226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.471 qpair failed and we were unable to recover it. 00:28:54.471 [2024-11-20 14:49:01.367416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.471 [2024-11-20 14:49:01.367424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.471 qpair failed and we were unable to recover it. 
00:28:54.471 [2024-11-20 14:49:01.367729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.471 [2024-11-20 14:49:01.367736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.471 qpair failed and we were unable to recover it. 00:28:54.471 [2024-11-20 14:49:01.367897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.471 [2024-11-20 14:49:01.367904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.471 qpair failed and we were unable to recover it. 00:28:54.471 [2024-11-20 14:49:01.368205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.471 [2024-11-20 14:49:01.368212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.471 qpair failed and we were unable to recover it. 00:28:54.471 [2024-11-20 14:49:01.368521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.368529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.368819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.368825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 
00:28:54.472 [2024-11-20 14:49:01.369128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.369134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.369411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.369419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.369741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.369749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.370415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.370431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.370756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.370763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 
00:28:54.472 [2024-11-20 14:49:01.371058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.371065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.371357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.371365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.371680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.371688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.371881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.371889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.372054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.372061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 
00:28:54.472 [2024-11-20 14:49:01.372378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.372385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.372708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.372715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.373023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.373030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.373344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.373351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.373666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.373674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 
00:28:54.472 [2024-11-20 14:49:01.373962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.373971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.374141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.374148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.374441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.374449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.374755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.374762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.375069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.375076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 
00:28:54.472 [2024-11-20 14:49:01.375382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.375390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.375753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.375759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.375907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.375914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.376257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.376265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.376599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.376606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 
00:28:54.472 [2024-11-20 14:49:01.376784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.376792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.377074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.377082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.377374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.377381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.377682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.377688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.377992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.377999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 
00:28:54.472 [2024-11-20 14:49:01.378295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.378301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.378510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.378517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.378800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.378807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.379091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.379098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.379288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.379296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 
00:28:54.472 [2024-11-20 14:49:01.379583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.472 [2024-11-20 14:49:01.379589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.472 qpair failed and we were unable to recover it. 00:28:54.472 [2024-11-20 14:49:01.379780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.379786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.380111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.380119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.380413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.380420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.380624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.380631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 
00:28:54.473 [2024-11-20 14:49:01.380964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.380971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.381308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.381315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.381604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.381612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.381929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.381936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.382281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.382288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 
00:28:54.473 [2024-11-20 14:49:01.382593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.382599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.382789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.382795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.383119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.383127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.383412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.383419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.383768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.383775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 
00:28:54.473 [2024-11-20 14:49:01.384084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.384091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.384395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.384402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.384758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.384766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.385049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.385056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.385358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.385365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 
00:28:54.473 [2024-11-20 14:49:01.385673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.385682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.385865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.385872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.386201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.386209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.386508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.386516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.386796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.386803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 
00:28:54.473 [2024-11-20 14:49:01.386965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.386972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.387259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.387266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.387567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.387574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.387892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.387900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 00:28:54.473 [2024-11-20 14:49:01.388177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.473 [2024-11-20 14:49:01.388184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.473 qpair failed and we were unable to recover it. 
00:28:54.473 [2024-11-20 14:49:01.388486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.473 [2024-11-20 14:49:01.388493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.473 qpair failed and we were unable to recover it.
00:28:54.473 [2024-11-20 14:49:01.388711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.473 [2024-11-20 14:49:01.388718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.473 qpair failed and we were unable to recover it.
00:28:54.473 [2024-11-20 14:49:01.389047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.473 [2024-11-20 14:49:01.389053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.473 qpair failed and we were unable to recover it.
00:28:54.473 [2024-11-20 14:49:01.389218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.473 [2024-11-20 14:49:01.389224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.473 qpair failed and we were unable to recover it.
00:28:54.473 [2024-11-20 14:49:01.389549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.473 [2024-11-20 14:49:01.389557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.389936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.389943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.390234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.390241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.390528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.390536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.390835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.390842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.391138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.391144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.391463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.391471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.391778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.391784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.392070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.392076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.392358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.392366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.392723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.392731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.393040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.393047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.393341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.393348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.393541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.393548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.393863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.393870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.394186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.394194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.394535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.394542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.394851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.394857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.395184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.395191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.395486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.395493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.395842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.395850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.396186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.396193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.396496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.396504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.396794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.396801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.396954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.396962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.397257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.397264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.397567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.397575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.397847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.397854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.398151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.398157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.398451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.398458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.398760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.398768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.399066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.399073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.399365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.399373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.399678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.399686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.399993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.400000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.400335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.400342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.400501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.400508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.400851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.400858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.474 qpair failed and we were unable to recover it.
00:28:54.474 [2024-11-20 14:49:01.401142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.474 [2024-11-20 14:49:01.401149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.401459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.401466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.401752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.401759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.402055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.402062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.402360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.402367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.402691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.402698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.402981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.402988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.403287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.403294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.403642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.403650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.403800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.403807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.404132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.404139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.404413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.404420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.404730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.404737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.405030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.405038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.405205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.405212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.405547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.405554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.405722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.405729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.406040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.406047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.406326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.406333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.406401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.406408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.406740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.406747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.407066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.407072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.407382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.407389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.407535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.407543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.407888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.407894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.408069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.408078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.408342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.408349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.408682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.408688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.408976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.408983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.409153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.409160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.409473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.409482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.409797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.409804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.410089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.410095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.410396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.410403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.410715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.410722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.411022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.411029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.411320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.411328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.411625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.411632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.475 qpair failed and we were unable to recover it.
00:28:54.475 [2024-11-20 14:49:01.411921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.475 [2024-11-20 14:49:01.411928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.412228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.412235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.412552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.412560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.412854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.412861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.413160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.413167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.413506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.413513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.413765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.413771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.414074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.414080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.414372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.414379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.414691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.414698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.414995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.415002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.415298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.415306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.415611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.415617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.415902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.415909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.416091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.416099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.416403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.416410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.416743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.416749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.417033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.417041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.417328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.417335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.417656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.417663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.417946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.417952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.418250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.418258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.418533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.418540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.418873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.418879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.419180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.419187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.419225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.419231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.419533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.419540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.419848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.419854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.420029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.420036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.420396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.420403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.420700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.420707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.420995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.421002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.421290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.421297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.421592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.421598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.421889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.476 [2024-11-20 14:49:01.421895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.476 qpair failed and we were unable to recover it.
00:28:54.476 [2024-11-20 14:49:01.422188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.476 [2024-11-20 14:49:01.422195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.476 qpair failed and we were unable to recover it. 00:28:54.476 [2024-11-20 14:49:01.422543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.476 [2024-11-20 14:49:01.422551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.476 qpair failed and we were unable to recover it. 00:28:54.476 [2024-11-20 14:49:01.422862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.476 [2024-11-20 14:49:01.422868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.476 qpair failed and we were unable to recover it. 00:28:54.476 [2024-11-20 14:49:01.423154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.476 [2024-11-20 14:49:01.423161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.423368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.423375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 
00:28:54.477 [2024-11-20 14:49:01.423679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.423686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.423995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.424002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.424353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.424360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.424681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.424689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.424984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.424992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 
00:28:54.477 [2024-11-20 14:49:01.425293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.425300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.425587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.425594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.425907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.425914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.426202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.426209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.426517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.426524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 
00:28:54.477 [2024-11-20 14:49:01.426873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.426879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.427172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.427178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.427544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.427552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.427850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.427857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.428145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.428152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 
00:28:54.477 [2024-11-20 14:49:01.428472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.428479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.428791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.428798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.429094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.429105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.429414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.429421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.429583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.429591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 
00:28:54.477 [2024-11-20 14:49:01.429910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.429917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.430203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.430209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.430397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.430404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.430748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.430756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.431040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.431047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 
00:28:54.477 [2024-11-20 14:49:01.431340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.431348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.431524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.431531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.431885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.431892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.432231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.432238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.432511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.432518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 
00:28:54.477 [2024-11-20 14:49:01.432822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.432829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.433115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.433122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.477 qpair failed and we were unable to recover it. 00:28:54.477 [2024-11-20 14:49:01.433312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.477 [2024-11-20 14:49:01.433319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.433632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.433639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.433962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.433969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 
00:28:54.478 [2024-11-20 14:49:01.434274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.434282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.434468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.434475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.434816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.434823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.435120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.435127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.435428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.435435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 
00:28:54.478 [2024-11-20 14:49:01.435631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.435638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.435937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.435945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.436236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.436243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.436532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.436539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.436826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.436832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 
00:28:54.478 [2024-11-20 14:49:01.437105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.437111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.437407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.437414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.437758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.437766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.437971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.437978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.438279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.438286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 
00:28:54.478 [2024-11-20 14:49:01.438552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.438559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.438869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.438876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.439182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.439189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.439493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.439501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.439808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.439815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 
00:28:54.478 [2024-11-20 14:49:01.440105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.440112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.440309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.440316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.440547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.440555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.440901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.440908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.441204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.441211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 
00:28:54.478 [2024-11-20 14:49:01.441366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.441374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.441649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.441657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.441962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.441969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.442258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.442265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.442551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.442559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 
00:28:54.478 [2024-11-20 14:49:01.442886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.442894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.443199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.443206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.443474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.443481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.443749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.443755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 00:28:54.478 [2024-11-20 14:49:01.444074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.478 [2024-11-20 14:49:01.444081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.478 qpair failed and we were unable to recover it. 
00:28:54.478 [2024-11-20 14:49:01.444383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.478 [2024-11-20 14:49:01.444391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.478 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats continuously from 14:49:01.444698 through 14:49:01.476981 ...]
00:28:54.482 [2024-11-20 14:49:01.477349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.477356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.477673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.477679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.477968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.477975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.478282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.478289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.478605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.478611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 
00:28:54.482 [2024-11-20 14:49:01.478908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.478915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.479249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.479257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.479554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.479561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.479831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.479838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.480202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.480209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 
00:28:54.482 [2024-11-20 14:49:01.480513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.480520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.480821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.480828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.481196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.481202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.481534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.481542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.481841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.481848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 
00:28:54.482 [2024-11-20 14:49:01.482136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.482143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.482456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.482464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.482753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.482760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.483075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.483083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.483380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.483387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 
00:28:54.482 [2024-11-20 14:49:01.483678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.483685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.483994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.484000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.484369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.484376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.484662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.484669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.484915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.484923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 
00:28:54.482 [2024-11-20 14:49:01.485239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.485250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.485416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.485424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.485695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.485702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.485994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.486001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.486301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.486309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 
00:28:54.482 [2024-11-20 14:49:01.486601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.486608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.486845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.486851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.487194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.482 [2024-11-20 14:49:01.487204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.482 qpair failed and we were unable to recover it. 00:28:54.482 [2024-11-20 14:49:01.487394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.487402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.487722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.487729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 
00:28:54.483 [2024-11-20 14:49:01.488015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.488022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.488383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.488390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.488719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.488725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.489037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.489045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.489356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.489363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 
00:28:54.483 [2024-11-20 14:49:01.489681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.489687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.489984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.489990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.490303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.490310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.490497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.490503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.490801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.490808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 
00:28:54.483 [2024-11-20 14:49:01.491096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.491103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.491393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.491400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.491702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.491709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.492061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.492068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.492346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.492354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 
00:28:54.483 [2024-11-20 14:49:01.492665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.492672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.492958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.492966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.493161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.493168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.493521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.493527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.493819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.493826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 
00:28:54.483 [2024-11-20 14:49:01.494122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.494129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.494469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.494477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.494716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.494723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.495016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.495023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.495326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.495334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 
00:28:54.483 [2024-11-20 14:49:01.495653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.495660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.495970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.495977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.496263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.496270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.496619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.496626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.497014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.497021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 
00:28:54.483 [2024-11-20 14:49:01.497313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.497321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.497622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.497629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.497923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.497930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.498009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.498016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.498203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.498210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 
00:28:54.483 [2024-11-20 14:49:01.498552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.483 [2024-11-20 14:49:01.498559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.483 qpair failed and we were unable to recover it. 00:28:54.483 [2024-11-20 14:49:01.498849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.484 [2024-11-20 14:49:01.498856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.484 qpair failed and we were unable to recover it. 00:28:54.484 [2024-11-20 14:49:01.499148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.484 [2024-11-20 14:49:01.499156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.484 qpair failed and we were unable to recover it. 00:28:54.484 [2024-11-20 14:49:01.499451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.484 [2024-11-20 14:49:01.499458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.484 qpair failed and we were unable to recover it. 00:28:54.484 [2024-11-20 14:49:01.499701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.484 [2024-11-20 14:49:01.499708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.484 qpair failed and we were unable to recover it. 
00:28:54.761 [2024-11-20 14:49:01.499908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.499917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.500214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.500222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.500522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.500532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.500824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.500831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.501142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.501149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 
00:28:54.761 [2024-11-20 14:49:01.501468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.501475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.501831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.501839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.502134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.502141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.502460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.502467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.502752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.502759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 
00:28:54.761 [2024-11-20 14:49:01.503068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.503075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.503361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.503369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.503676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.503683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.503976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.503983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.504296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.504303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 
00:28:54.761 [2024-11-20 14:49:01.504753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.504759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.505082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.505089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.505378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.505385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.505694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.505701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.506008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.506015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 
00:28:54.761 [2024-11-20 14:49:01.506347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.506354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.506690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.506697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.507019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.507027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.507318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.507325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.761 [2024-11-20 14:49:01.507506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.507514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 
00:28:54.761 [2024-11-20 14:49:01.507811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.761 [2024-11-20 14:49:01.507817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.761 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.508107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.508114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.508370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.508378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.508674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.508681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.508963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.508970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 
00:28:54.762 [2024-11-20 14:49:01.509136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.509143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.509402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.509410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.509743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.509750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.510033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.510039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.510361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.510369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 
00:28:54.762 [2024-11-20 14:49:01.510520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.510528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.510790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.510797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.510970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.510978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.511353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.511361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.511512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.511519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 
00:28:54.762 [2024-11-20 14:49:01.511818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.511826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.512008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.512016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.512330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.512337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.512614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.512621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.512932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.512939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 
00:28:54.762 [2024-11-20 14:49:01.513258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.513265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.513437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.513444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.513699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.513706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.513951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.513959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.514123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.514131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 
00:28:54.762 [2024-11-20 14:49:01.514464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.514471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.514804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.514811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.514883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.514889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.515075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.515081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.515512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.515519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 
00:28:54.762 [2024-11-20 14:49:01.515831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.515838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.516133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.516140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.516441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.516448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.516754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.516761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.517051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.517058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 
00:28:54.762 [2024-11-20 14:49:01.517225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.517232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.517297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.517304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.517599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.517605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.762 qpair failed and we were unable to recover it. 00:28:54.762 [2024-11-20 14:49:01.517784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.762 [2024-11-20 14:49:01.517791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.518171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.518179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 
00:28:54.763 [2024-11-20 14:49:01.518446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.518454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.518759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.518766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.519162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.519169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.519470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.519477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.519886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.519893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 
00:28:54.763 [2024-11-20 14:49:01.520084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.520091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.520396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.520404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.520696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.520702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.521000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.521008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.521356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.521364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 
00:28:54.763 [2024-11-20 14:49:01.521625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.521632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.521845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.521852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.522054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.522062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.522249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.522257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.522580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.522586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 
00:28:54.763 [2024-11-20 14:49:01.522911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.522918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.523208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.523215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.523432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.523439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.523768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.523775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.524081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.524089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 
00:28:54.763 [2024-11-20 14:49:01.524277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.524285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.524633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.524641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.524939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.524946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.525235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.525242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.525538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.525546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 
00:28:54.763 [2024-11-20 14:49:01.525725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.525731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.526087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.526094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.526415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.526423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.526706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.526713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.526869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.526876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 
00:28:54.763 [2024-11-20 14:49:01.527197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.527206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.527299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.527308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.527563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.527571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.527877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.527884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 00:28:54.763 [2024-11-20 14:49:01.528162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.528169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.763 qpair failed and we were unable to recover it. 
00:28:54.763 [2024-11-20 14:49:01.528495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.763 [2024-11-20 14:49:01.528502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.528784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.528791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.529096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.529102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.529496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.529503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.529804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.529812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 
00:28:54.764 [2024-11-20 14:49:01.530118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.530126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.530452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.530460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.530595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.530603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.530954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.531029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.531512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.531577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 
00:28:54.764 [2024-11-20 14:49:01.531960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.531970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.532335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.532342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.532638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.532645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.532802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.532809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.533086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.533093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 
00:28:54.764 [2024-11-20 14:49:01.533423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.533430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.533767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.533775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.533946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.533955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.534268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.534275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.534577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.534584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 
00:28:54.764 [2024-11-20 14:49:01.534759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.534767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.535051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.535058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.535230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.535237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.535684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.535691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.535847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.535854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 
00:28:54.764 [2024-11-20 14:49:01.536165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.536173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.536462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.536470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.536760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.536768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.536949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.536956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.537232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.537239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 
00:28:54.764 [2024-11-20 14:49:01.537538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.537545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.537865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.537872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.538126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.538132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.538419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.538427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.538686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.538694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 
00:28:54.764 [2024-11-20 14:49:01.538851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.538858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.539202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.539209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.539458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.539465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.764 qpair failed and we were unable to recover it. 00:28:54.764 [2024-11-20 14:49:01.539733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.764 [2024-11-20 14:49:01.539739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.540052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.540059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 
00:28:54.765 [2024-11-20 14:49:01.540359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.540366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.540764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.540770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.541066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.541072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.541361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.541368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.541603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.541611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 
00:28:54.765 [2024-11-20 14:49:01.541888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.541895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.542204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.542212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.542521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.542529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.542820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.542827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.543015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.543021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 
00:28:54.765 [2024-11-20 14:49:01.543336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.543343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.543644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.543651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.543800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.543807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.544081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.544088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.544448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.544456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 
00:28:54.765 [2024-11-20 14:49:01.544776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.544782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.545076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.545083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.545400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.545412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.545659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.545666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.545956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.545962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 
00:28:54.765 [2024-11-20 14:49:01.546294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.546301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.546588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.546595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.546871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.546878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.547171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.547178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.547352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.547359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 
00:28:54.765 [2024-11-20 14:49:01.547680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.547686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.548010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.548016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.548286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.548294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.548594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.548601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.765 qpair failed and we were unable to recover it. 00:28:54.765 [2024-11-20 14:49:01.548902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.765 [2024-11-20 14:49:01.548908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 
00:28:54.766 [2024-11-20 14:49:01.549220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.549227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.549509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.549516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.549790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.549797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.549993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.550000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.550310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.550317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 
00:28:54.766 [2024-11-20 14:49:01.550601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.550608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.550968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.550975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.551344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.551351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.551640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.551647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.551930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.551936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 
00:28:54.766 [2024-11-20 14:49:01.552248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.552255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.552553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.552559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.552839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.552846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.553023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.553030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.553304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.553311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 
00:28:54.766 [2024-11-20 14:49:01.553639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.553646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.553833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.553840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.554136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.554142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.554321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.554329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.554657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.554664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 
00:28:54.766 [2024-11-20 14:49:01.554972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.554978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.555308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.555315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.555622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.555628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.555955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.555962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 00:28:54.766 [2024-11-20 14:49:01.556157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.766 [2024-11-20 14:49:01.556164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.766 qpair failed and we were unable to recover it. 
00:28:54.769 [2024-11-20 14:49:01.589075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.769 [2024-11-20 14:49:01.589081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.769 qpair failed and we were unable to recover it. 00:28:54.769 [2024-11-20 14:49:01.589240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.769 [2024-11-20 14:49:01.589250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.769 qpair failed and we were unable to recover it. 00:28:54.769 [2024-11-20 14:49:01.589437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.769 [2024-11-20 14:49:01.589444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.589756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.589762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.590038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.590045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 
00:28:54.770 [2024-11-20 14:49:01.590334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.590341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.590638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.590644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.590959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.590966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.591253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.591260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.591552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.591559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 
00:28:54.770 [2024-11-20 14:49:01.591870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.591877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.592159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.592166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.592332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.592340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.592678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.592685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.592973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.592979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 
00:28:54.770 [2024-11-20 14:49:01.593345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.593352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.593718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.593725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.594016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.594023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.594338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.594346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.594523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.594531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 
00:28:54.770 [2024-11-20 14:49:01.594791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.594798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.594947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.594954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.595179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.595186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.595489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.595496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.595803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.595810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 
00:28:54.770 [2024-11-20 14:49:01.595983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.595989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.596174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.596181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.596529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.596536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.596712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.596718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.597014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.597020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 
00:28:54.770 [2024-11-20 14:49:01.597219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.597226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.597536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.597543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.770 qpair failed and we were unable to recover it. 00:28:54.770 [2024-11-20 14:49:01.597868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.770 [2024-11-20 14:49:01.597876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.598172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.598179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.598490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.598497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 
00:28:54.771 [2024-11-20 14:49:01.598806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.598813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.599139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.599146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.599422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.599429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.599717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.599723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.600038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.600047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 
00:28:54.771 [2024-11-20 14:49:01.600399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.600406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.600697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.600703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.601006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.601013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.601319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.601326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.601628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.601635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 
00:28:54.771 [2024-11-20 14:49:01.602009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.602016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.602208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.602215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.602404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.602412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.602729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.602735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.603063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.603070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 
00:28:54.771 [2024-11-20 14:49:01.603453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.603460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.603804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.603811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.604086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.604093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.604368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.604375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.604676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.604683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 
00:28:54.771 [2024-11-20 14:49:01.604886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.604892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.605242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.605252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.605534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.605541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.605799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.605806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.606103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.606110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 
00:28:54.771 [2024-11-20 14:49:01.606302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.606309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.606497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.606504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.606818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.606825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.607178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.607185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 00:28:54.771 [2024-11-20 14:49:01.607485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.771 [2024-11-20 14:49:01.607491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.771 qpair failed and we were unable to recover it. 
00:28:54.771 [2024-11-20 14:49:01.607835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.772 [2024-11-20 14:49:01.607842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.772 qpair failed and we were unable to recover it. 00:28:54.772 [2024-11-20 14:49:01.608022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.772 [2024-11-20 14:49:01.608029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.772 qpair failed and we were unable to recover it. 00:28:54.772 [2024-11-20 14:49:01.608325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.772 [2024-11-20 14:49:01.608332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.772 qpair failed and we were unable to recover it. 00:28:54.772 [2024-11-20 14:49:01.608621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.772 [2024-11-20 14:49:01.608628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.772 qpair failed and we were unable to recover it. 00:28:54.772 [2024-11-20 14:49:01.608914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.772 [2024-11-20 14:49:01.608920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.772 qpair failed and we were unable to recover it. 
00:28:54.772 [2024-11-20 14:49:01.609219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.772 [2024-11-20 14:49:01.609226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.772 qpair failed and we were unable to recover it. 00:28:54.772 [2024-11-20 14:49:01.609525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.772 [2024-11-20 14:49:01.609532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.772 qpair failed and we were unable to recover it. 00:28:54.772 [2024-11-20 14:49:01.609859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.772 [2024-11-20 14:49:01.609866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.772 qpair failed and we were unable to recover it. 00:28:54.772 [2024-11-20 14:49:01.610154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.772 [2024-11-20 14:49:01.610161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.772 qpair failed and we were unable to recover it. 00:28:54.772 [2024-11-20 14:49:01.610339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.772 [2024-11-20 14:49:01.610347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.772 qpair failed and we were unable to recover it. 
00:28:54.772 [2024-11-20 14:49:01.610652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.772 [2024-11-20 14:49:01.610659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.772 qpair failed and we were unable to recover it. 00:28:54.775 (last two messages repeated with successive timestamps through [2024-11-20 14:49:01.643847]; qpair failed and we were unable to recover it each time)
00:28:54.775 [2024-11-20 14:49:01.644171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.775 [2024-11-20 14:49:01.644178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.775 qpair failed and we were unable to recover it. 00:28:54.775 [2024-11-20 14:49:01.644493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.775 [2024-11-20 14:49:01.644500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.775 qpair failed and we were unable to recover it. 00:28:54.775 [2024-11-20 14:49:01.644800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.775 [2024-11-20 14:49:01.644806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.775 qpair failed and we were unable to recover it. 00:28:54.775 [2024-11-20 14:49:01.645106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.775 [2024-11-20 14:49:01.645112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.775 qpair failed and we were unable to recover it. 00:28:54.775 [2024-11-20 14:49:01.645428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.775 [2024-11-20 14:49:01.645435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.775 qpair failed and we were unable to recover it. 
00:28:54.775 [2024-11-20 14:49:01.645728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.775 [2024-11-20 14:49:01.645734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.775 qpair failed and we were unable to recover it. 00:28:54.775 [2024-11-20 14:49:01.645889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.775 [2024-11-20 14:49:01.645896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.775 qpair failed and we were unable to recover it. 00:28:54.775 [2024-11-20 14:49:01.646206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.775 [2024-11-20 14:49:01.646213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.775 qpair failed and we were unable to recover it. 00:28:54.775 [2024-11-20 14:49:01.646510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.646518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.646820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.646829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 
00:28:54.776 [2024-11-20 14:49:01.647181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.647188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.647487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.647494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.647785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.647792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.648085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.648091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.648379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.648386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 
00:28:54.776 [2024-11-20 14:49:01.648595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.648602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.648865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.648872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.649178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.649184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.649476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.649483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.649763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.649771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 
00:28:54.776 [2024-11-20 14:49:01.650101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.650108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.650280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.650288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.650680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.650686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.650965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.650972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.651120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.651127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 
00:28:54.776 [2024-11-20 14:49:01.651409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.651416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.651700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.651707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.652020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.652027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.652309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.652316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.652639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.652646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 
00:28:54.776 [2024-11-20 14:49:01.652964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.652971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.653258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.653265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.653571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.653577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.653869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.653876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.654180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.654187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 
00:28:54.776 [2024-11-20 14:49:01.654477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.654485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.654802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.654809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.655091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.655098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.655264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.655271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 00:28:54.776 [2024-11-20 14:49:01.655568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.776 [2024-11-20 14:49:01.655575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.776 qpair failed and we were unable to recover it. 
00:28:54.776 [2024-11-20 14:49:01.655864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.655871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.656173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.656180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.656518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.656525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.656813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.656820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.657012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.657018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 
00:28:54.777 [2024-11-20 14:49:01.657224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.657231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.657557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.657565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.657851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.657858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.658175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.658182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.658478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.658487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 
00:28:54.777 [2024-11-20 14:49:01.658776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.658783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.659085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.659092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.659383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.659390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.659682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.659689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.659984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.659991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 
00:28:54.777 [2024-11-20 14:49:01.660298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.660305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.660615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.660621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.660911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.660917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.661206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.661213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.661572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.661579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 
00:28:54.777 [2024-11-20 14:49:01.661870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.661877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.662185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.662192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.662505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.662512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.662814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.662821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.663148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.663154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 
00:28:54.777 [2024-11-20 14:49:01.663487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.663494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.663673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.663681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.663995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.664002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.664294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.664301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.664584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.664591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 
00:28:54.777 [2024-11-20 14:49:01.664902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.664909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.665197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.665204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.665509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.665516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.665822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.665828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.666183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.666189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 
00:28:54.777 [2024-11-20 14:49:01.666456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.666463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.666765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.666772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.667062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.777 [2024-11-20 14:49:01.667069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.777 qpair failed and we were unable to recover it. 00:28:54.777 [2024-11-20 14:49:01.667267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.778 [2024-11-20 14:49:01.667274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.778 qpair failed and we were unable to recover it. 00:28:54.778 [2024-11-20 14:49:01.667571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.778 [2024-11-20 14:49:01.667578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.778 qpair failed and we were unable to recover it. 
00:28:54.781 [2024-11-20 14:49:01.700432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.700439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.700721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.700728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.701021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.701028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.701219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.701226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.701525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.701532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 
00:28:54.781 [2024-11-20 14:49:01.701825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.701832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.702116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.702123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.702416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.702424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.702728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.702734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.703023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.703030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 
00:28:54.781 [2024-11-20 14:49:01.703347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.703354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.703648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.703655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.703821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.703828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.704160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.704167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.704467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.704474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 
00:28:54.781 [2024-11-20 14:49:01.704750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.704757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.705071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.705077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.705397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.705404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.705749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.705756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.706042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.706048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 
00:28:54.781 [2024-11-20 14:49:01.706312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.706319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.706551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.706557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.706881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.706887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.707181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.707188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.707489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.707496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 
00:28:54.781 [2024-11-20 14:49:01.707786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.707793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.708081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.708088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.708294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.708301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.708592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.708598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.708882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.708889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 
00:28:54.781 [2024-11-20 14:49:01.709177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.709183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.709474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.709481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.709777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.709784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.709955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.781 [2024-11-20 14:49:01.709961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.781 qpair failed and we were unable to recover it. 00:28:54.781 [2024-11-20 14:49:01.710273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.710281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 
00:28:54.782 [2024-11-20 14:49:01.710476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.710484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.710757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.710764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.710977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.710984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.711283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.711290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.711699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.711705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 
00:28:54.782 [2024-11-20 14:49:01.711916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.711923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.712216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.712222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.712536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.712543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.712837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.712844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.713130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.713137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 
00:28:54.782 [2024-11-20 14:49:01.713430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.713437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.713739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.713748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.714088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.714095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.714400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.714407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.714702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.714708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 
00:28:54.782 [2024-11-20 14:49:01.715005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.715011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.715310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.715317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.715659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.715666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.715960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.715967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.716126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.716134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 
00:28:54.782 [2024-11-20 14:49:01.716462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.716470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.716755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.716762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.717047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.717054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.717343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.717350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.717730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.717737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 
00:28:54.782 [2024-11-20 14:49:01.718024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.718031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.718323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.718331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.718642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.718650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.718947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.718955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.719129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.719136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 
00:28:54.782 [2024-11-20 14:49:01.719422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.719430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.719734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.719741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.720033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.720041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.720322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.720330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.720637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.720644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 
00:28:54.782 [2024-11-20 14:49:01.720935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.720943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.782 [2024-11-20 14:49:01.721228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.782 [2024-11-20 14:49:01.721235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.782 qpair failed and we were unable to recover it. 00:28:54.783 [2024-11-20 14:49:01.721523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.783 [2024-11-20 14:49:01.721530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.783 qpair failed and we were unable to recover it. 00:28:54.783 [2024-11-20 14:49:01.721698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.783 [2024-11-20 14:49:01.721705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.783 qpair failed and we were unable to recover it. 00:28:54.783 [2024-11-20 14:49:01.722014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.783 [2024-11-20 14:49:01.722021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.783 qpair failed and we were unable to recover it. 
00:28:54.783 [2024-11-20 14:49:01.722273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.783 [2024-11-20 14:49:01.722280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.783 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed with errno = 111, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7f3c94000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats continuously, with timestamps advancing from 14:49:01.722 to 14:49:01.755 ...]
00:28:54.785 [2024-11-20 14:49:01.755361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.785 [2024-11-20 14:49:01.755369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.785 qpair failed and we were unable to recover it.
00:28:54.786 [2024-11-20 14:49:01.755668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.755675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.755845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.755853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.756163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.756170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.756502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.756510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.756802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.756809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 
00:28:54.786 [2024-11-20 14:49:01.757094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.757101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.757467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.757475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.757776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.757783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.758067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.758074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.758389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.758396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 
00:28:54.786 [2024-11-20 14:49:01.758669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.758676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.758852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.758860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.759135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.759142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.759471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.759479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.759778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.759786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 
00:28:54.786 [2024-11-20 14:49:01.760073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.760080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.760385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.760394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.760666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.760673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.760955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.760962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.761247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.761254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 
00:28:54.786 [2024-11-20 14:49:01.761497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.761504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.761816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.761823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.762109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.762116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.762423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.762430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.762716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.762723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 
00:28:54.786 [2024-11-20 14:49:01.762888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.762895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.763234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.763241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.763540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.763547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.786 qpair failed and we were unable to recover it. 00:28:54.786 [2024-11-20 14:49:01.763824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.786 [2024-11-20 14:49:01.763831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.764108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.764115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 
00:28:54.787 [2024-11-20 14:49:01.764442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.764450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.764741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.764748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.765038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.765045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.765353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.765360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.765676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.765682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 
00:28:54.787 [2024-11-20 14:49:01.765990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.765996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.766283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.766290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.766570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.766577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.766878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.766885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.767207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.767214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 
00:28:54.787 [2024-11-20 14:49:01.767495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.767502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.767688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.767695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.767963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.767970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.768261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.768268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.768612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.768619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 
00:28:54.787 [2024-11-20 14:49:01.768925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.768932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.769138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.769146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.769441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.769448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.769834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.769841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.770137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.770144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 
00:28:54.787 [2024-11-20 14:49:01.770449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.770456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.770771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.770778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.771087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.771094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.771330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.771337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.771645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.771651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 
00:28:54.787 [2024-11-20 14:49:01.771995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.772001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.772287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.772296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.772613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.772619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.772961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.772968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.773139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.773147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 
00:28:54.787 [2024-11-20 14:49:01.773356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.773363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.773666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.773673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.773963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.773969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.774253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.774260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.774506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.774513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 
00:28:54.787 [2024-11-20 14:49:01.774805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.774812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.787 [2024-11-20 14:49:01.774991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.787 [2024-11-20 14:49:01.774998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.787 qpair failed and we were unable to recover it. 00:28:54.788 [2024-11-20 14:49:01.775293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.788 [2024-11-20 14:49:01.775300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.788 qpair failed and we were unable to recover it. 00:28:54.788 [2024-11-20 14:49:01.775607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.788 [2024-11-20 14:49:01.775614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.788 qpair failed and we were unable to recover it. 00:28:54.788 [2024-11-20 14:49:01.775898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.788 [2024-11-20 14:49:01.775905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.788 qpair failed and we were unable to recover it. 
00:28:54.788 [2024-11-20 14:49:01.776118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.788 [2024-11-20 14:49:01.776125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.788 qpair failed and we were unable to recover it. 00:28:54.788 [2024-11-20 14:49:01.776448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.788 [2024-11-20 14:49:01.776455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.788 qpair failed and we were unable to recover it. 00:28:54.788 [2024-11-20 14:49:01.776614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.788 [2024-11-20 14:49:01.776620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.788 qpair failed and we were unable to recover it. 00:28:54.788 [2024-11-20 14:49:01.776984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.788 [2024-11-20 14:49:01.776990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.788 qpair failed and we were unable to recover it. 00:28:54.788 [2024-11-20 14:49:01.777304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.788 [2024-11-20 14:49:01.777311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:54.788 qpair failed and we were unable to recover it. 
00:28:54.788 [2024-11-20 14:49:01.777602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.788 [2024-11-20 14:49:01.777609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:54.788 qpair failed and we were unable to recover it.
00:28:54.788 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x7f3c94000b90 (addr=10.0.0.2, port=4420) through [2024-11-20 14:49:01.811457]; duplicate retries elided ...]
00:28:55.071 [2024-11-20 14:49:01.811747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.071 [2024-11-20 14:49:01.811754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.071 qpair failed and we were unable to recover it. 00:28:55.071 [2024-11-20 14:49:01.812046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.071 [2024-11-20 14:49:01.812053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.071 qpair failed and we were unable to recover it. 00:28:55.071 [2024-11-20 14:49:01.812345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.071 [2024-11-20 14:49:01.812353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.071 qpair failed and we were unable to recover it. 00:28:55.071 [2024-11-20 14:49:01.812655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.071 [2024-11-20 14:49:01.812662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.071 qpair failed and we were unable to recover it. 00:28:55.071 [2024-11-20 14:49:01.813025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.071 [2024-11-20 14:49:01.813032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.071 qpair failed and we were unable to recover it. 
00:28:55.071 [2024-11-20 14:49:01.813316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.071 [2024-11-20 14:49:01.813323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.071 qpair failed and we were unable to recover it. 00:28:55.071 [2024-11-20 14:49:01.813617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.071 [2024-11-20 14:49:01.813623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.071 qpair failed and we were unable to recover it. 00:28:55.071 [2024-11-20 14:49:01.813912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.071 [2024-11-20 14:49:01.813919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.071 qpair failed and we were unable to recover it. 00:28:55.071 [2024-11-20 14:49:01.814208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.071 [2024-11-20 14:49:01.814215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.071 qpair failed and we were unable to recover it. 00:28:55.071 [2024-11-20 14:49:01.814508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.071 [2024-11-20 14:49:01.814516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.071 qpair failed and we were unable to recover it. 
00:28:55.071 [2024-11-20 14:49:01.814804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.071 [2024-11-20 14:49:01.814811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.071 qpair failed and we were unable to recover it. 00:28:55.071 [2024-11-20 14:49:01.815108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.071 [2024-11-20 14:49:01.815115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.071 qpair failed and we were unable to recover it. 00:28:55.071 [2024-11-20 14:49:01.815417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.071 [2024-11-20 14:49:01.815424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.071 qpair failed and we were unable to recover it. 00:28:55.071 [2024-11-20 14:49:01.815747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.071 [2024-11-20 14:49:01.815754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.071 qpair failed and we were unable to recover it. 00:28:55.071 [2024-11-20 14:49:01.815936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.071 [2024-11-20 14:49:01.815944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.071 qpair failed and we were unable to recover it. 
00:28:55.071 [2024-11-20 14:49:01.816277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.071 [2024-11-20 14:49:01.816285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.071 qpair failed and we were unable to recover it. 00:28:55.071 [2024-11-20 14:49:01.816570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.071 [2024-11-20 14:49:01.816576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.071 qpair failed and we were unable to recover it. 00:28:55.071 [2024-11-20 14:49:01.816912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.816919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.817263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.817270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.817566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.817573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 
00:28:55.072 [2024-11-20 14:49:01.817916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.817923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.818229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.818236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.818405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.818413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.818581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.818589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.818853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.818860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 
00:28:55.072 [2024-11-20 14:49:01.819156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.819163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.819327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.819337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.819663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.819670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.819977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.819984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.820301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.820308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 
00:28:55.072 [2024-11-20 14:49:01.820603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.820610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.820905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.820912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.821229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.821235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.821522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.821530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.821697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.821704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 
00:28:55.072 [2024-11-20 14:49:01.821987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.821994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.822334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.822341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.822635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.822642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.822820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.822826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.823165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.823172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 
00:28:55.072 [2024-11-20 14:49:01.823374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.823381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.823553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.823559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.823847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.823853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.824192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.824198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.824488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.824496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 
00:28:55.072 [2024-11-20 14:49:01.824771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.824778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.825067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.825074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.825370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.825377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.825706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.825713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.826041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.826047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 
00:28:55.072 [2024-11-20 14:49:01.826335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.826341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.826647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.826653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.826941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.826947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.827254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.827261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 00:28:55.072 [2024-11-20 14:49:01.827517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.072 [2024-11-20 14:49:01.827524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.072 qpair failed and we were unable to recover it. 
00:28:55.073 [2024-11-20 14:49:01.827817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.827823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 00:28:55.073 [2024-11-20 14:49:01.828111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.828117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 00:28:55.073 [2024-11-20 14:49:01.828411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.828418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 00:28:55.073 [2024-11-20 14:49:01.828725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.828732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 00:28:55.073 [2024-11-20 14:49:01.829032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.829039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 
00:28:55.073 [2024-11-20 14:49:01.829335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.829342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 00:28:55.073 [2024-11-20 14:49:01.829658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.829665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 00:28:55.073 [2024-11-20 14:49:01.829954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.829960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 00:28:55.073 [2024-11-20 14:49:01.830296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.830303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 00:28:55.073 [2024-11-20 14:49:01.830595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.830601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 
00:28:55.073 [2024-11-20 14:49:01.830925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.830932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 00:28:55.073 [2024-11-20 14:49:01.831209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.831218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 00:28:55.073 [2024-11-20 14:49:01.831517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.831524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 00:28:55.073 [2024-11-20 14:49:01.831836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.831843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 00:28:55.073 [2024-11-20 14:49:01.832134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.832141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 
00:28:55.073 [2024-11-20 14:49:01.832313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.832321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 00:28:55.073 [2024-11-20 14:49:01.832529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.832536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 00:28:55.073 [2024-11-20 14:49:01.832843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.832850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 00:28:55.073 [2024-11-20 14:49:01.833152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.833159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 00:28:55.073 [2024-11-20 14:49:01.833530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.833537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it. 
00:28:55.073 [2024-11-20 14:49:01.833833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.073 [2024-11-20 14:49:01.833839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.073 qpair failed and we were unable to recover it.
00:28:55.073 [... the same posix_sock_create connect() failure (errno = 111) and nvme_tcp_qpair_connect_sock error for tqpair=0x7f3c94000b90 (addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeat continuously from 14:49:01.834131 through 14:49:01.866996 ...]
00:28:55.076 [2024-11-20 14:49:01.867317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.076 [2024-11-20 14:49:01.867324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.076 qpair failed and we were unable to recover it. 00:28:55.076 [2024-11-20 14:49:01.867586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.076 [2024-11-20 14:49:01.867593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.076 qpair failed and we were unable to recover it. 00:28:55.076 [2024-11-20 14:49:01.867805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.076 [2024-11-20 14:49:01.867811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.076 qpair failed and we were unable to recover it. 00:28:55.076 [2024-11-20 14:49:01.868104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.076 [2024-11-20 14:49:01.868110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.076 qpair failed and we were unable to recover it. 00:28:55.076 [2024-11-20 14:49:01.868452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.076 [2024-11-20 14:49:01.868459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.076 qpair failed and we were unable to recover it. 
00:28:55.076 [2024-11-20 14:49:01.868763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.076 [2024-11-20 14:49:01.868770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.076 qpair failed and we were unable to recover it. 00:28:55.076 [2024-11-20 14:49:01.869062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.076 [2024-11-20 14:49:01.869068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.076 qpair failed and we were unable to recover it. 00:28:55.076 [2024-11-20 14:49:01.869258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.076 [2024-11-20 14:49:01.869265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.076 qpair failed and we were unable to recover it. 00:28:55.076 [2024-11-20 14:49:01.869567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.076 [2024-11-20 14:49:01.869574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.076 qpair failed and we were unable to recover it. 00:28:55.076 [2024-11-20 14:49:01.869884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.076 [2024-11-20 14:49:01.869891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.076 qpair failed and we were unable to recover it. 
00:28:55.076 [2024-11-20 14:49:01.870195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.076 [2024-11-20 14:49:01.870202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.076 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.870497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.870505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.870811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.870818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.871135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.871142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.871301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.871308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 
00:28:55.077 [2024-11-20 14:49:01.871593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.871599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.871929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.871936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.872221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.872227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.872569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.872576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.872755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.872762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 
00:28:55.077 [2024-11-20 14:49:01.872950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.872957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.873258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.873266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.873573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.873580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.873873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.873882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.874168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.874175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 
00:28:55.077 [2024-11-20 14:49:01.874480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.874488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.874798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.874805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.875095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.875102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.875385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.875392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.875695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.875701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 
00:28:55.077 [2024-11-20 14:49:01.875987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.875994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.876280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.876286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.876580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.876587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.876748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.876755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.877086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.877093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 
00:28:55.077 [2024-11-20 14:49:01.877384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.877391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.877676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.877683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.877984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.877990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.878281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.878288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.878350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.878357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 
00:28:55.077 [2024-11-20 14:49:01.878661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.878668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.878950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.878957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.879251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.879258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.879608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.879614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.879780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.879787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 
00:28:55.077 [2024-11-20 14:49:01.880067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.880074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.880384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.880391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.880659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.880666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.077 [2024-11-20 14:49:01.880940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.077 [2024-11-20 14:49:01.880947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.077 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.881247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.881255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 
00:28:55.078 [2024-11-20 14:49:01.881590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.881596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.881893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.881900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.882189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.882196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.882499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.882507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.882822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.882829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 
00:28:55.078 [2024-11-20 14:49:01.883122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.883129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.883431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.883438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.883742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.883749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.884039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.884046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.884350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.884357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 
00:28:55.078 [2024-11-20 14:49:01.884540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.884547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.884857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.884864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.885191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.885198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.885486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.885494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.885779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.885786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 
00:28:55.078 [2024-11-20 14:49:01.886029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.886035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.886338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.886346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.886655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.886662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.886971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.886978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.887275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.887283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 
00:28:55.078 [2024-11-20 14:49:01.887448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.887455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.887651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.887657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.887947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.887954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.888284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.888291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.888576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.888583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 
00:28:55.078 [2024-11-20 14:49:01.888869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.888875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.889168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.889175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.889497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.889504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.889787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.889794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 00:28:55.078 [2024-11-20 14:49:01.890098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.078 [2024-11-20 14:49:01.890105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.078 qpair failed and we were unable to recover it. 
00:28:55.081 [2024-11-20 14:49:01.922220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.081 [2024-11-20 14:49:01.922226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.081 qpair failed and we were unable to recover it. 00:28:55.081 [2024-11-20 14:49:01.922608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.081 [2024-11-20 14:49:01.922614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.081 qpair failed and we were unable to recover it. 00:28:55.081 [2024-11-20 14:49:01.922957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.081 [2024-11-20 14:49:01.922963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.081 qpair failed and we were unable to recover it. 00:28:55.081 [2024-11-20 14:49:01.923239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.081 [2024-11-20 14:49:01.923253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.081 qpair failed and we were unable to recover it. 00:28:55.081 [2024-11-20 14:49:01.923562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.081 [2024-11-20 14:49:01.923569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.081 qpair failed and we were unable to recover it. 
00:28:55.081 [2024-11-20 14:49:01.923902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.081 [2024-11-20 14:49:01.923909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.081 qpair failed and we were unable to recover it. 00:28:55.081 [2024-11-20 14:49:01.924199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.924206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.924548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.924556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.924854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.924861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.925164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.925171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 
00:28:55.082 [2024-11-20 14:49:01.925543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.925550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.925858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.925864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.926019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.926027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.926369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.926376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.926574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.926581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 
00:28:55.082 [2024-11-20 14:49:01.926892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.926898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.927232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.927238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.927532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.927539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.927830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.927837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.928140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.928147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 
00:28:55.082 [2024-11-20 14:49:01.928445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.928453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.928776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.928783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.929102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.929109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.929408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.929415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.929718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.929725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 
00:28:55.082 [2024-11-20 14:49:01.930019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.930026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.930307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.930314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.930603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.930610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.930789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.930797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.931066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.931073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 
00:28:55.082 [2024-11-20 14:49:01.931372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.931379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.931662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.931668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.931950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.931957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.932249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.932258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.932556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.932562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 
00:28:55.082 [2024-11-20 14:49:01.932713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.932719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.932991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.932998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.933274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.933281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.933597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.933604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.933900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.933906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 
00:28:55.082 [2024-11-20 14:49:01.934213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.934220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.934508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.934515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.934808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.934815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.935093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.082 [2024-11-20 14:49:01.935100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-11-20 14:49:01.935386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.935393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 
00:28:55.083 [2024-11-20 14:49:01.935686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.935692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.935993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.936000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.936296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.936303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.936636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.936643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.936946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.936952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 
00:28:55.083 [2024-11-20 14:49:01.937226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.937233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.937573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.937580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.937873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.937880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.938202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.938208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.938468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.938475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 
00:28:55.083 [2024-11-20 14:49:01.938678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.938685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.938984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.938991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.939286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.939293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.939578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.939585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.939931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.939938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 
00:28:55.083 [2024-11-20 14:49:01.940262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.940269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.940593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.940600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.940892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.940899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.941192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.941199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.941536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.941544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 
00:28:55.083 [2024-11-20 14:49:01.941831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.941838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.942125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.942132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.942318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.942325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.942628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.942635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.942925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.942931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 
00:28:55.083 [2024-11-20 14:49:01.943228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.943234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.943531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.943538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.943844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.943850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.944133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.944141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.944427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.944434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 
00:28:55.083 [2024-11-20 14:49:01.944725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.944732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.945031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.945038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.945383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.945390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.945648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.945655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-11-20 14:49:01.945982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.083 [2024-11-20 14:49:01.945989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.083 qpair failed and we were unable to recover it. 
00:28:55.086 [2024-11-20 14:49:01.978024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.086 [2024-11-20 14:49:01.978031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.086 qpair failed and we were unable to recover it. 00:28:55.086 [2024-11-20 14:49:01.978312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.086 [2024-11-20 14:49:01.978320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.978615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.978622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.978944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.978952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.979256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.979263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 
00:28:55.087 [2024-11-20 14:49:01.979637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.979644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.979834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.979840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.980126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.980133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.980399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.980406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.980711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.980719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 
00:28:55.087 [2024-11-20 14:49:01.981005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.981012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.981308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.981316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.981678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.981685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.981995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.982002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.982262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.982269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 
00:28:55.087 [2024-11-20 14:49:01.982575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.982582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.982891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.982898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.983191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.983198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.983496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.983503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.983813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.983820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 
00:28:55.087 [2024-11-20 14:49:01.984106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.984113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.984470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.984477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.984804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.984810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.985110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.985116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.985408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.985415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 
00:28:55.087 [2024-11-20 14:49:01.985715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.985722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.986024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.986031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.986290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.986297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.986600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.986607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.986924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.986931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 
00:28:55.087 [2024-11-20 14:49:01.987258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.987265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.987582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.987589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.987872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.987879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.988186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.988193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.988491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.988498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 
00:28:55.087 [2024-11-20 14:49:01.988825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.988831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.989160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.989167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.989488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.989494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.989794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.087 [2024-11-20 14:49:01.989800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.087 qpair failed and we were unable to recover it. 00:28:55.087 [2024-11-20 14:49:01.990002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.990012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 
00:28:55.088 [2024-11-20 14:49:01.990342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.990349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.990633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.990639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.990903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.990909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.991264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.991272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.991590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.991596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 
00:28:55.088 [2024-11-20 14:49:01.991887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.991893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.992062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.992070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.992236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.992248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.992520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.992527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.992890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.992897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 
00:28:55.088 [2024-11-20 14:49:01.993240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.993251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.993558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.993565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.993842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.993849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.994145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.994153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.994435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.994442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 
00:28:55.088 [2024-11-20 14:49:01.994747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.994754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.995044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.995050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.995226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.995233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.995278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.995284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.995574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.995580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 
00:28:55.088 [2024-11-20 14:49:01.995868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.995874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.996196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.996203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.996501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.996509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.996809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.996816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.997106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.997113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 
00:28:55.088 [2024-11-20 14:49:01.997293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.997300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.997598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.997605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.997890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.997896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.998252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.998259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.998558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.998565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 
00:28:55.088 [2024-11-20 14:49:01.998831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.998838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.999124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.999131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.999475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.999483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:01.999779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:01.999786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:02.000091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:02.000099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 
00:28:55.088 [2024-11-20 14:49:02.000587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:02.000601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:02.000965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.088 [2024-11-20 14:49:02.000972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.088 qpair failed and we were unable to recover it. 00:28:55.088 [2024-11-20 14:49:02.001256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.089 [2024-11-20 14:49:02.001263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.089 qpair failed and we were unable to recover it. 00:28:55.089 [2024-11-20 14:49:02.001647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.089 [2024-11-20 14:49:02.001655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.089 qpair failed and we were unable to recover it. 00:28:55.089 [2024-11-20 14:49:02.001846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.089 [2024-11-20 14:49:02.001853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.089 qpair failed and we were unable to recover it. 
00:28:55.092 [2024-11-20 14:49:02.034069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.034078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.034388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.034396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.034721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.034728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.035008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.035015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.035355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.035362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 
00:28:55.092 [2024-11-20 14:49:02.035647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.035655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.035813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.035820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.036053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.036060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.036462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.036470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.036769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.036776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 
00:28:55.092 [2024-11-20 14:49:02.037129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.037136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.037295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.037302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.037644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.037651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.037961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.037968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.038276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.038284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 
00:28:55.092 [2024-11-20 14:49:02.038632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.038640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.038974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.038981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.039172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.039179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.039368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.039375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.039652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.039660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 
00:28:55.092 [2024-11-20 14:49:02.039968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.039975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.040204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.040211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.040506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.040514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.040838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.040845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.041136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.041143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 
00:28:55.092 [2024-11-20 14:49:02.041425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.041432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.041744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.041751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.042043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.042050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.042358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.042365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.042729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.042736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 
00:28:55.092 [2024-11-20 14:49:02.043040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.043047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.043225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.043232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.043551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.043558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.043933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.043940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 00:28:55.092 [2024-11-20 14:49:02.044234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.092 [2024-11-20 14:49:02.044241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.092 qpair failed and we were unable to recover it. 
00:28:55.092 [2024-11-20 14:49:02.044547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.044554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.044830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.044837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.045021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.045029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.045309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.045316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.045628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.045635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 
00:28:55.093 [2024-11-20 14:49:02.045886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.045895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.046200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.046207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.046614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.046622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.046904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.046912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.047197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.047204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 
00:28:55.093 [2024-11-20 14:49:02.047421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.047429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.047749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.047756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.048144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.048150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.048530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.048538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.048820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.048827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 
00:28:55.093 [2024-11-20 14:49:02.049132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.049139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.049345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.049353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.049597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.049604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.049773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.049780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.050065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.050073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 
00:28:55.093 [2024-11-20 14:49:02.050403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.050411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.050686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.050693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.051012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.051019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.051154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.051162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.051462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.051470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 
00:28:55.093 [2024-11-20 14:49:02.051767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.051774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.051944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.051952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.052296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.052303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.052577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.052584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.052918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.052925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 
00:28:55.093 [2024-11-20 14:49:02.053108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.053115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.053282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.053291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.053520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.053526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.053706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.093 [2024-11-20 14:49:02.053713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.093 qpair failed and we were unable to recover it. 00:28:55.093 [2024-11-20 14:49:02.054015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.094 [2024-11-20 14:49:02.054022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.094 qpair failed and we were unable to recover it. 
00:28:55.094 [2024-11-20 14:49:02.054338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.094 [2024-11-20 14:49:02.054345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.094 qpair failed and we were unable to recover it. 00:28:55.094 [2024-11-20 14:49:02.054654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.094 [2024-11-20 14:49:02.054662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.094 qpair failed and we were unable to recover it. 00:28:55.094 [2024-11-20 14:49:02.054987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.094 [2024-11-20 14:49:02.054995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.094 qpair failed and we were unable to recover it. 00:28:55.094 [2024-11-20 14:49:02.055277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.094 [2024-11-20 14:49:02.055284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.094 qpair failed and we were unable to recover it. 00:28:55.094 [2024-11-20 14:49:02.055641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.094 [2024-11-20 14:49:02.055648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.094 qpair failed and we were unable to recover it. 
00:28:55.094 [2024-11-20 14:49:02.055958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.055965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.056163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.056170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.056594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.056600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.056927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.056934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.057210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.057217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.057430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.057438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.057762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.057769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.058087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.058094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.058400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.058407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.058693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.058700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.059056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.059063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.059236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.059250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.059445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.059452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.059749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.059756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.060031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.060037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.060393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.060400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.060584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.060591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.060915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.060921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.061211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.061218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.061599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.061606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.061902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.061909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.062199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.062206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.062497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.062504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.062546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.062554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.062834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.062841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.063127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.063134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.063434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.063441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.063712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.063718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.064032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.064038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.064229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.064236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.064334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.064341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.064629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.094 [2024-11-20 14:49:02.064636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.094 qpair failed and we were unable to recover it.
00:28:55.094 [2024-11-20 14:49:02.065012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.065019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.065312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.065319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.065618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.065625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.065949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.065955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.066266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.066273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.066609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.066616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.066901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.066908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.067202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.067209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.067587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.067594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.067988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.067995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.068281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.068288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.068481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.068488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.068749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.068756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.069035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.069043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.069348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.069355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.069665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.069672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.069998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.070004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.070336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.070343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.070674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.070681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.070991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.070998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.071290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.071298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.071609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.071615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.071900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.071906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.072214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.072221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.072529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.072537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.072837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.072844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.073139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.073146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.073463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.073470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.073815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.073821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.074031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.074038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.074361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.074369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.074684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.074691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.074989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.074995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.075285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.075292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.075565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.075572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.075739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.075746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.076098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.076105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.076426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.095 [2024-11-20 14:49:02.076434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.095 qpair failed and we were unable to recover it.
00:28:55.095 [2024-11-20 14:49:02.076630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.076636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.076844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.076851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.077164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.077171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.077341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.077348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.077627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.077634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.078033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.078040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.078364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.078371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.078681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.078688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.078981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.078988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.079146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.079153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.079468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.079475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.079764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.079771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.080072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.080078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.080363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.080370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.080667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.080674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.081047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.081055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.081258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.081264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.081580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.081587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.081829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.081835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.082173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.082179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.082488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.082495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.082862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.082870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.083178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.083185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.083506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.083513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.083685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.083692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.083874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.083881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.084193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.084200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.084575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.084582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.084870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.084877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.085180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.085187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.085571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.085578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.085887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.085894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.086191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.086198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.086547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.086554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.086868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.086875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.087214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.087221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.087532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.087539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.087827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.087834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.088158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.088165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.088478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.088485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.088767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.096 [2024-11-20 14:49:02.088774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.096 qpair failed and we were unable to recover it.
00:28:55.096 [2024-11-20 14:49:02.089094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.097 [2024-11-20 14:49:02.089101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.097 qpair failed and we were unable to recover it.
00:28:55.097 [2024-11-20 14:49:02.089402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.097 [2024-11-20 14:49:02.089409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.097 qpair failed and we were unable to recover it.
00:28:55.097 [2024-11-20 14:49:02.089699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.097 [2024-11-20 14:49:02.089707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.097 qpair failed and we were unable to recover it.
00:28:55.097 [2024-11-20 14:49:02.089986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.097 [2024-11-20 14:49:02.089992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.097 qpair failed and we were unable to recover it.
00:28:55.097 [2024-11-20 14:49:02.090273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.090280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.090483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.090490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.090666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.090673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.090967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.090974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.091264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.091271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 
00:28:55.097 [2024-11-20 14:49:02.091651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.091658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.091975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.091982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.092320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.092327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.092660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.092667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.092946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.092953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 
00:28:55.097 [2024-11-20 14:49:02.093249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.093258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.093550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.093557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.093754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.093760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.094080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.094086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.094264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.094272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 
00:28:55.097 [2024-11-20 14:49:02.094563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.094569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.094865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.094872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.095184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.095191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.095492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.095498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.095795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.095802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 
00:28:55.097 [2024-11-20 14:49:02.096119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.096126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.096412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.096418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.096733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.096740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.096913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.096921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.097204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.097211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 
00:28:55.097 [2024-11-20 14:49:02.097513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.097520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.097833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.097840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.098122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.098128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.098247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.098254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.098595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.098602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 
00:28:55.097 [2024-11-20 14:49:02.098795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.098802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.099095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.099102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.099391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.099398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.099721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.099728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.100078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.100085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 
00:28:55.097 [2024-11-20 14:49:02.100259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.100268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.100465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.100472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.097 qpair failed and we were unable to recover it. 00:28:55.097 [2024-11-20 14:49:02.100812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.097 [2024-11-20 14:49:02.100819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.101107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.101114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.101404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.101410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 
00:28:55.098 [2024-11-20 14:49:02.101750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.101756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.102144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.102151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.102460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.102468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.102768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.102775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.103080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.103086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 
00:28:55.098 [2024-11-20 14:49:02.103366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.103373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.103703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.103710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.103872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.103879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.104144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.104151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.104311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.104318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 
00:28:55.098 [2024-11-20 14:49:02.104657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.104665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.104944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.104950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.105234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.105240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.105596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.105603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.105885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.105892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 
00:28:55.098 [2024-11-20 14:49:02.106192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.106199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.106491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.106498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.106797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.106804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.106949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.106955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.107264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.107271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 
00:28:55.098 [2024-11-20 14:49:02.107658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.107664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.107829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.107836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.108040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.108046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.108344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.108352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.108657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.108664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 
00:28:55.098 [2024-11-20 14:49:02.108868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.108874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.109055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.109062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.109361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.109368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.109750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.109757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.110068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.110075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 
00:28:55.098 [2024-11-20 14:49:02.110364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.110371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.110734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.110740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.111083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.098 [2024-11-20 14:49:02.111090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.098 qpair failed and we were unable to recover it. 00:28:55.098 [2024-11-20 14:49:02.111288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.099 [2024-11-20 14:49:02.111296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.375 qpair failed and we were unable to recover it. 00:28:55.375 [2024-11-20 14:49:02.111622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.375 [2024-11-20 14:49:02.111630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.375 qpair failed and we were unable to recover it. 
00:28:55.375 [2024-11-20 14:49:02.111810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.375 [2024-11-20 14:49:02.111816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.375 qpair failed and we were unable to recover it.
00:28:55.378 [last message repeated through 2024-11-20 14:49:02.145198: connect() failed, errno = 111; sock connection error of tqpair=0x7f3c94000b90 (briefly 0x7f3ca0000b90) with addr=10.0.0.2, port=4420; each attempt ended with "qpair failed and we were unable to recover it."]
00:28:55.378 [2024-11-20 14:49:02.145486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.378 [2024-11-20 14:49:02.145493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.378 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.145785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.145792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.146132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.146139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.146377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.146385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.146778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.146785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 
00:28:55.379 [2024-11-20 14:49:02.147096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.147103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.147393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.147400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.147718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.147725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.148021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.148028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.148180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.148188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 
00:28:55.379 [2024-11-20 14:49:02.148361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.148368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.148629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.148636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.148939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.148946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.149241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.149251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.149548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.149554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 
00:28:55.379 [2024-11-20 14:49:02.149887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.149894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.150211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.150218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.150500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.150507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.150794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.150801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.151087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.151094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 
00:28:55.379 [2024-11-20 14:49:02.151401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.151410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.151771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.151778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.152062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.152068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.152372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.152379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.152686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.152693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 
00:28:55.379 [2024-11-20 14:49:02.152848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.152854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.153150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.153157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.153465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.153472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.153758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.153765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.154061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.154068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 
00:28:55.379 [2024-11-20 14:49:02.154374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.154381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.154709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.154716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.155027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.155034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.155323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.155330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.155528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.155535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 
00:28:55.379 [2024-11-20 14:49:02.155724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.155730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.156048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.156055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.156374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.156382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.379 [2024-11-20 14:49:02.156698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.379 [2024-11-20 14:49:02.156705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.379 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.157021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.157028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 
00:28:55.380 [2024-11-20 14:49:02.157231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.157237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.157537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.157544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.157863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.157870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.158164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.158171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.158474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.158481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 
00:28:55.380 [2024-11-20 14:49:02.158680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.158687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.158969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.158976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.159263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.159272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.159397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.159404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.159547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.159554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 
00:28:55.380 [2024-11-20 14:49:02.159844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.159851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.160028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.160035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.160374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.160381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.160714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.160721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.160889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.160896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 
00:28:55.380 [2024-11-20 14:49:02.161165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.161172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.161466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.161473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.161861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.161868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.162056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.162063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.162352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.162359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 
00:28:55.380 [2024-11-20 14:49:02.162660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.162666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.162839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.162846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.163116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.163123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.163297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.163304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.163582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.163589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 
00:28:55.380 [2024-11-20 14:49:02.163876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.163882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.164093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.164100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.164354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.164361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.164672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.164679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 00:28:55.380 [2024-11-20 14:49:02.164994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.380 [2024-11-20 14:49:02.165000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.380 qpair failed and we were unable to recover it. 
00:28:55.380 [2024-11-20 14:49:02.165303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.381 [2024-11-20 14:49:02.165310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.381 qpair failed and we were unable to recover it. 00:28:55.381 [2024-11-20 14:49:02.165610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.381 [2024-11-20 14:49:02.165617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.381 qpair failed and we were unable to recover it. 00:28:55.381 [2024-11-20 14:49:02.165769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.381 [2024-11-20 14:49:02.165776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.381 qpair failed and we were unable to recover it. 00:28:55.381 [2024-11-20 14:49:02.166106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.381 [2024-11-20 14:49:02.166113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.381 qpair failed and we were unable to recover it. 00:28:55.381 [2024-11-20 14:49:02.166407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.381 [2024-11-20 14:49:02.166414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.381 qpair failed and we were unable to recover it. 
00:28:55.381 [2024-11-20 14:49:02.166479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:55.381 [2024-11-20 14:49:02.166485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 
00:28:55.381 qpair failed and we were unable to recover it. 
00:28:55.384 [... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." messages repeated through 14:49:02.198 ...] 
00:28:55.384 [2024-11-20 14:49:02.198920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.198927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.199232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.199239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.199535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.199541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.199825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.199832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.200130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.200137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 
00:28:55.384 [2024-11-20 14:49:02.200451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.200458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.200769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.200775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.201087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.201094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.201300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.201307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.201627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.201634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 
00:28:55.384 [2024-11-20 14:49:02.201938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.201945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.202242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.202255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.202545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.202552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.202871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.202877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.203163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.203170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 
00:28:55.384 [2024-11-20 14:49:02.203445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.203452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.203749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.203756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.203952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.203960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.204272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.204279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.204578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.204585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 
00:28:55.384 [2024-11-20 14:49:02.204754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.204761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.205071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.205078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.205356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.205363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.205751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.205757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.206088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.206095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 
00:28:55.384 [2024-11-20 14:49:02.206437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.206444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.206575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.206581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.206725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.206732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.207095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.207102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.207437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.207443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 
00:28:55.384 [2024-11-20 14:49:02.207730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.384 [2024-11-20 14:49:02.207737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.384 qpair failed and we were unable to recover it. 00:28:55.384 [2024-11-20 14:49:02.208044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.208051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.208214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.208222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.208535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.208542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.208880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.208887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 
00:28:55.385 [2024-11-20 14:49:02.209184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.209191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.209405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.209412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.209740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.209747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.209894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.209901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.210189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.210196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 
00:28:55.385 [2024-11-20 14:49:02.210499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.210506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.210675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.210682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.210993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.211000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.211169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.211176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.211398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.211405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 
00:28:55.385 [2024-11-20 14:49:02.211821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.211828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.212105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.212113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.212441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.212448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.212778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.212784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.213060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.213067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 
00:28:55.385 [2024-11-20 14:49:02.213370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.213377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.213580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.213586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.213775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.213783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.214071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.214078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.214376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.214384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 
00:28:55.385 [2024-11-20 14:49:02.214543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.214550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.214879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.214886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.215228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.215237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.215391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.215398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.215753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.215759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 
00:28:55.385 [2024-11-20 14:49:02.216051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.216058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.216339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.216346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.216645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.216651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.216956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.216964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.385 [2024-11-20 14:49:02.217266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.217273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 
00:28:55.385 [2024-11-20 14:49:02.217564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.385 [2024-11-20 14:49:02.217571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.385 qpair failed and we were unable to recover it. 00:28:55.386 [2024-11-20 14:49:02.217870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.386 [2024-11-20 14:49:02.217876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.386 qpair failed and we were unable to recover it. 00:28:55.386 [2024-11-20 14:49:02.218165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.386 [2024-11-20 14:49:02.218171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.386 qpair failed and we were unable to recover it. 00:28:55.386 [2024-11-20 14:49:02.218480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.386 [2024-11-20 14:49:02.218488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.386 qpair failed and we were unable to recover it. 00:28:55.386 [2024-11-20 14:49:02.218674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.386 [2024-11-20 14:49:02.218682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.386 qpair failed and we were unable to recover it. 
00:28:55.386 [2024-11-20 14:49:02.218849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.386 [2024-11-20 14:49:02.218856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.386 qpair failed and we were unable to recover it. 00:28:55.386 [2024-11-20 14:49:02.219210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.386 [2024-11-20 14:49:02.219216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.386 qpair failed and we were unable to recover it. 00:28:55.386 [2024-11-20 14:49:02.219507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.386 [2024-11-20 14:49:02.219514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.386 qpair failed and we were unable to recover it. 00:28:55.386 [2024-11-20 14:49:02.219840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.386 [2024-11-20 14:49:02.219846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.386 qpair failed and we were unable to recover it. 00:28:55.386 [2024-11-20 14:49:02.220185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.386 [2024-11-20 14:49:02.220192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.386 qpair failed and we were unable to recover it. 
00:28:55.386 [2024-11-20 14:49:02.220480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.386 [2024-11-20 14:49:02.220487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.386 qpair failed and we were unable to recover it. 00:28:55.386 [2024-11-20 14:49:02.220696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.386 [2024-11-20 14:49:02.220703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.386 qpair failed and we were unable to recover it. 00:28:55.386 [2024-11-20 14:49:02.221030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.386 [2024-11-20 14:49:02.221036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.386 qpair failed and we were unable to recover it. 00:28:55.386 [2024-11-20 14:49:02.221248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.386 [2024-11-20 14:49:02.221255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.386 qpair failed and we were unable to recover it. 00:28:55.386 [2024-11-20 14:49:02.221463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.386 [2024-11-20 14:49:02.221469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.386 qpair failed and we were unable to recover it. 
00:28:55.389 [2024-11-20 14:49:02.253416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.389 [2024-11-20 14:49:02.253424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.389 qpair failed and we were unable to recover it. 00:28:55.389 [2024-11-20 14:49:02.253763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.389 [2024-11-20 14:49:02.253771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.389 qpair failed and we were unable to recover it. 00:28:55.389 [2024-11-20 14:49:02.254094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.389 [2024-11-20 14:49:02.254101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.389 qpair failed and we were unable to recover it. 00:28:55.389 [2024-11-20 14:49:02.254403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.389 [2024-11-20 14:49:02.254410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.389 qpair failed and we were unable to recover it. 00:28:55.389 [2024-11-20 14:49:02.254710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.389 [2024-11-20 14:49:02.254717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.389 qpair failed and we were unable to recover it. 
00:28:55.389 [2024-11-20 14:49:02.255017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.389 [2024-11-20 14:49:02.255024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.389 qpair failed and we were unable to recover it. 00:28:55.389 [2024-11-20 14:49:02.255319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.389 [2024-11-20 14:49:02.255327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.389 qpair failed and we were unable to recover it. 00:28:55.389 [2024-11-20 14:49:02.255638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.389 [2024-11-20 14:49:02.255645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.389 qpair failed and we were unable to recover it. 00:28:55.389 [2024-11-20 14:49:02.255937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.389 [2024-11-20 14:49:02.255944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.389 qpair failed and we were unable to recover it. 00:28:55.389 [2024-11-20 14:49:02.256121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.389 [2024-11-20 14:49:02.256129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.389 qpair failed and we were unable to recover it. 
00:28:55.389 [2024-11-20 14:49:02.256406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.389 [2024-11-20 14:49:02.256414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.389 qpair failed and we were unable to recover it. 00:28:55.389 [2024-11-20 14:49:02.256708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.389 [2024-11-20 14:49:02.256716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.389 qpair failed and we were unable to recover it. 00:28:55.389 [2024-11-20 14:49:02.256907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.389 [2024-11-20 14:49:02.256914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.389 qpair failed and we were unable to recover it. 00:28:55.389 [2024-11-20 14:49:02.257103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.389 [2024-11-20 14:49:02.257109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.389 qpair failed and we were unable to recover it. 00:28:55.389 [2024-11-20 14:49:02.257407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.389 [2024-11-20 14:49:02.257415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.389 qpair failed and we were unable to recover it. 
00:28:55.389 [2024-11-20 14:49:02.257743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.389 [2024-11-20 14:49:02.257751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.389 qpair failed and we were unable to recover it. 00:28:55.389 [2024-11-20 14:49:02.258049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.258056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.258352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.258359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.258680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.258688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.258987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.258995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 
00:28:55.390 [2024-11-20 14:49:02.259290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.259297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.259464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.259471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.259761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.259768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.260068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.260075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.260373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.260381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 
00:28:55.390 [2024-11-20 14:49:02.260727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.260734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.261064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.261072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.261359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.261367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.261714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.261721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.262052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.262060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 
00:28:55.390 [2024-11-20 14:49:02.262366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.262374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.262677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.262684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.262851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.262859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.263172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.263179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.263480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.263487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 
00:28:55.390 [2024-11-20 14:49:02.263859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.263866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.264155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.264162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.264468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.264475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.264763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.264770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.265110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.265116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 
00:28:55.390 [2024-11-20 14:49:02.265409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.265417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.265739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.265746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.266051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.266058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.266367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.266374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.266684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.266691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 
00:28:55.390 [2024-11-20 14:49:02.266878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.266885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.267161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.267168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.267388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.267395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.267674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.267681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.267992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.267999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 
00:28:55.390 [2024-11-20 14:49:02.268182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.268189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.268522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.268530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.268817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.268825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.269123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.390 [2024-11-20 14:49:02.269130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.390 qpair failed and we were unable to recover it. 00:28:55.390 [2024-11-20 14:49:02.269346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.269353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 
00:28:55.391 [2024-11-20 14:49:02.269713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.269720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 00:28:55.391 [2024-11-20 14:49:02.270010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.270017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 00:28:55.391 [2024-11-20 14:49:02.270265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.270272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 00:28:55.391 [2024-11-20 14:49:02.270587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.270594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 00:28:55.391 [2024-11-20 14:49:02.270884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.270891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 
00:28:55.391 [2024-11-20 14:49:02.271197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.271205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 00:28:55.391 [2024-11-20 14:49:02.271503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.271510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 00:28:55.391 [2024-11-20 14:49:02.271810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.271817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 00:28:55.391 [2024-11-20 14:49:02.272109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.272116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 00:28:55.391 [2024-11-20 14:49:02.272423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.272431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 
00:28:55.391 [2024-11-20 14:49:02.272765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.272772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 00:28:55.391 [2024-11-20 14:49:02.272985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.272992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 00:28:55.391 [2024-11-20 14:49:02.273345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.273355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 00:28:55.391 [2024-11-20 14:49:02.273631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.273638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 00:28:55.391 [2024-11-20 14:49:02.273934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.273941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 
00:28:55.391 [2024-11-20 14:49:02.274132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.274139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 00:28:55.391 [2024-11-20 14:49:02.274427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.274435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 00:28:55.391 [2024-11-20 14:49:02.274630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.274637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 00:28:55.391 [2024-11-20 14:49:02.274680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.274687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 00:28:55.391 [2024-11-20 14:49:02.274862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.391 [2024-11-20 14:49:02.274869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.391 qpair failed and we were unable to recover it. 
00:28:55.391 [2024-11-20 14:49:02.275197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.391 [2024-11-20 14:49:02.275205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.391 qpair failed and we were unable to recover it.
00:28:55.394 [... the three messages above repeat for roughly 114 further connect attempts between 14:49:02.275 and 14:49:02.308; every attempt fails with errno = 111 for the same tqpair=0x7f3c94000b90, addr=10.0.0.2, port=4420, and none recovers ...]
00:28:55.394 [2024-11-20 14:49:02.308673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.394 [2024-11-20 14:49:02.308680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.394 qpair failed and we were unable to recover it. 00:28:55.394 [2024-11-20 14:49:02.309008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.394 [2024-11-20 14:49:02.309015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.394 qpair failed and we were unable to recover it. 00:28:55.394 [2024-11-20 14:49:02.309298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.394 [2024-11-20 14:49:02.309305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.394 qpair failed and we were unable to recover it. 00:28:55.394 [2024-11-20 14:49:02.309660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.394 [2024-11-20 14:49:02.309667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.394 qpair failed and we were unable to recover it. 00:28:55.394 [2024-11-20 14:49:02.309998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.394 [2024-11-20 14:49:02.310004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.394 qpair failed and we were unable to recover it. 
00:28:55.394 [2024-11-20 14:49:02.310160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.394 [2024-11-20 14:49:02.310167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.394 qpair failed and we were unable to recover it. 00:28:55.394 [2024-11-20 14:49:02.310391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.394 [2024-11-20 14:49:02.310398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.394 qpair failed and we were unable to recover it. 00:28:55.394 [2024-11-20 14:49:02.310684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.394 [2024-11-20 14:49:02.310691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.394 qpair failed and we were unable to recover it. 00:28:55.394 [2024-11-20 14:49:02.310987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.394 [2024-11-20 14:49:02.310994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.394 qpair failed and we were unable to recover it. 00:28:55.394 [2024-11-20 14:49:02.311166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.394 [2024-11-20 14:49:02.311172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.394 qpair failed and we were unable to recover it. 
00:28:55.394 [2024-11-20 14:49:02.311453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.394 [2024-11-20 14:49:02.311460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.394 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.311747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.311754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.311916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.311924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.312230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.312237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.312543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.312550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 
00:28:55.395 [2024-11-20 14:49:02.312848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.312855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.313205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.313212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.313517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.313524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.313704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.313710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.314010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.314017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 
00:28:55.395 [2024-11-20 14:49:02.314292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.314299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.314616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.314623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.314903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.314910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.315188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.315195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.315494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.315503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 
00:28:55.395 [2024-11-20 14:49:02.315678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.315685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.315987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.315994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.316323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.316330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.316628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.316634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.316965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.316972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 
00:28:55.395 [2024-11-20 14:49:02.317265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.317272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.317578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.317585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.317896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.317902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.318234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.318241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.318523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.318530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 
00:28:55.395 [2024-11-20 14:49:02.318848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.318855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.319143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.319150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.319478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.319485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.319809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.319816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.320167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.320173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 
00:28:55.395 [2024-11-20 14:49:02.320543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.320550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.320860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.320866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.321158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.321164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.321471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.321478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.321778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.321785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 
00:28:55.395 [2024-11-20 14:49:02.322010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.322017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.322321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.322328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.322508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.322515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.395 [2024-11-20 14:49:02.322675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.395 [2024-11-20 14:49:02.322682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.395 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.322977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.322984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 
00:28:55.396 [2024-11-20 14:49:02.323255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.323262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.323568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.323575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.323949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.323956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.324270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.324276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.324566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.324573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 
00:28:55.396 [2024-11-20 14:49:02.324846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.324853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.325151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.325157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.325438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.325445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.325834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.325841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.326018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.326024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 
00:28:55.396 [2024-11-20 14:49:02.326314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.326321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.326663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.326669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.326970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.326977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.327283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.327290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.327617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.327625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 
00:28:55.396 [2024-11-20 14:49:02.327910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.327917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.328216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.328223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.328524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.328531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.328819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.328826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.329117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.329124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 
00:28:55.396 [2024-11-20 14:49:02.329477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.329484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.329690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.329697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.330015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.330021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.330356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.330363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 00:28:55.396 [2024-11-20 14:49:02.330654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.396 [2024-11-20 14:49:02.330661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.396 qpair failed and we were unable to recover it. 
00:28:55.396 [2024-11-20 14:49:02.330940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.396 [2024-11-20 14:49:02.330947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.396 qpair failed and we were unable to recover it.
00:28:55.399 [2024-11-20 14:49:02.364464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.399 [2024-11-20 14:49:02.364471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.399 qpair failed and we were unable to recover it. 00:28:55.399 [2024-11-20 14:49:02.364810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.399 [2024-11-20 14:49:02.364817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.399 qpair failed and we were unable to recover it. 00:28:55.399 [2024-11-20 14:49:02.365133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.399 [2024-11-20 14:49:02.365140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.399 qpair failed and we were unable to recover it. 00:28:55.399 [2024-11-20 14:49:02.365453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.399 [2024-11-20 14:49:02.365460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.399 qpair failed and we were unable to recover it. 00:28:55.399 [2024-11-20 14:49:02.365749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.399 [2024-11-20 14:49:02.365756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.399 qpair failed and we were unable to recover it. 
00:28:55.399 [2024-11-20 14:49:02.366051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.399 [2024-11-20 14:49:02.366058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.399 qpair failed and we were unable to recover it. 00:28:55.399 [2024-11-20 14:49:02.366211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.399 [2024-11-20 14:49:02.366218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.399 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.366511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.366518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.366810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.366817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.367146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.367153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 
00:28:55.400 [2024-11-20 14:49:02.367335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.367343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.367611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.367617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.367945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.367952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.368285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.368292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.368519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.368526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 
00:28:55.400 [2024-11-20 14:49:02.368864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.368870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.369154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.369161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.369476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.369483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.369782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.369789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.370065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.370071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 
00:28:55.400 [2024-11-20 14:49:02.370378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.370384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.370698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.370705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.371009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.371016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.371310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.371318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.371645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.371652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 
00:28:55.400 [2024-11-20 14:49:02.371957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.371964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.372126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.372133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.372314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.372321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.372372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.372379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.372562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.372569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 
00:28:55.400 [2024-11-20 14:49:02.372832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.372839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.373139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.373146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.373446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.373453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.373749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.373756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.374073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.374082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 
00:28:55.400 [2024-11-20 14:49:02.374372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.374380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.374691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.374697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.375048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.375054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.375343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.375350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.375789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.375796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 
00:28:55.400 [2024-11-20 14:49:02.376074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.376080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.376277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.376283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.376559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.376565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.376762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.376769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 00:28:55.400 [2024-11-20 14:49:02.377080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.400 [2024-11-20 14:49:02.377087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.400 qpair failed and we were unable to recover it. 
00:28:55.400 [2024-11-20 14:49:02.377417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.377424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.377817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.377823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.378030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.378036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.378219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.378227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.378444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.378451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 
00:28:55.401 [2024-11-20 14:49:02.378784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.378791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.379131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.379137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.379419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.379426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.379603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.379610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.379985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.379992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 
00:28:55.401 [2024-11-20 14:49:02.380293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.380300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.380617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.380624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.380990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.380997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.381151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.381158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.381483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.381490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 
00:28:55.401 [2024-11-20 14:49:02.381662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.381669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.382001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.382007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.382329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.382336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.382662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.382669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.382961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.382968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 
00:28:55.401 [2024-11-20 14:49:02.383283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.383290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.383586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.383593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.383905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.383912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.384260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.384267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.384565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.384572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 
00:28:55.401 [2024-11-20 14:49:02.384901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.384908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.385221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.385228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.385563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.385571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.385847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.385853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.386250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.386258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 
00:28:55.401 [2024-11-20 14:49:02.386606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.386613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.386804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.386811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.386996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.387003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.387336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.387343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 00:28:55.401 [2024-11-20 14:49:02.387661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.401 [2024-11-20 14:49:02.387667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.401 qpair failed and we were unable to recover it. 
00:28:55.681 [2024-11-20 14:49:02.419188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.681 [2024-11-20 14:49:02.419196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.681 qpair failed and we were unable to recover it. 00:28:55.681 [2024-11-20 14:49:02.419544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.681 [2024-11-20 14:49:02.419551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.681 qpair failed and we were unable to recover it. 00:28:55.681 [2024-11-20 14:49:02.419864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.681 [2024-11-20 14:49:02.419871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.681 qpair failed and we were unable to recover it. 00:28:55.681 [2024-11-20 14:49:02.420175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.681 [2024-11-20 14:49:02.420182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.681 qpair failed and we were unable to recover it. 00:28:55.681 [2024-11-20 14:49:02.420582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.681 [2024-11-20 14:49:02.420591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.681 qpair failed and we were unable to recover it. 
00:28:55.681 [2024-11-20 14:49:02.420862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.681 [2024-11-20 14:49:02.420869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.681 qpair failed and we were unable to recover it. 00:28:55.681 [2024-11-20 14:49:02.421191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.681 [2024-11-20 14:49:02.421198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.421397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.421405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.421577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.421583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.421906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.421913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 
00:28:55.682 [2024-11-20 14:49:02.422070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.422077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.422445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.422453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.422774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.422781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.423137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.423143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.423491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.423498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 
00:28:55.682 [2024-11-20 14:49:02.423704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.423711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.424018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.424025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.424392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.424399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.424561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.424568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.424588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1506020 (9): Bad file descriptor 00:28:55.682 [2024-11-20 14:49:02.425149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.425193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 
00:28:55.682 [2024-11-20 14:49:02.425642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.425687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.426028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.426036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.426336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.426343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.426635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.426643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.426993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.427000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 
00:28:55.682 [2024-11-20 14:49:02.427320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.427327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.427514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.427522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.427828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.427835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.428051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.428058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.428273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.428280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 
00:28:55.682 [2024-11-20 14:49:02.428532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.428539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.428830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.428837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.429025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.429032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.429371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.429378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.429641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.429648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 
00:28:55.682 [2024-11-20 14:49:02.429949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.429956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.430274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.430281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.682 qpair failed and we were unable to recover it. 00:28:55.682 [2024-11-20 14:49:02.430588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.682 [2024-11-20 14:49:02.430595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.430993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.431000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.431186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.431193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 
00:28:55.683 [2024-11-20 14:49:02.431502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.431509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.431805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.431812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.432110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.432117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.432415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.432422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.432602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.432609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 
00:28:55.683 [2024-11-20 14:49:02.432885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.432892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.433235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.433242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.433546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.433552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.433867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.433873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.434215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.434221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 
00:28:55.683 [2024-11-20 14:49:02.434538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.434545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.434842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.434849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.435007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.435014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.435317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.435325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.435664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.435671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 
00:28:55.683 [2024-11-20 14:49:02.436004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.436011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.436310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.436317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.436637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.436645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.436987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.436993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.437321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.437328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 
00:28:55.683 [2024-11-20 14:49:02.437628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.437635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.437868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.437876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.438175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.438182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.438363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.438371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.438700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.438706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 
00:28:55.683 [2024-11-20 14:49:02.439017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.439024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.683 [2024-11-20 14:49:02.439347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.683 [2024-11-20 14:49:02.439354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.683 qpair failed and we were unable to recover it. 00:28:55.684 [2024-11-20 14:49:02.439642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.684 [2024-11-20 14:49:02.439649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.684 qpair failed and we were unable to recover it. 00:28:55.684 [2024-11-20 14:49:02.439942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.684 [2024-11-20 14:49:02.439949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.684 qpair failed and we were unable to recover it. 00:28:55.684 [2024-11-20 14:49:02.440263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.684 [2024-11-20 14:49:02.440270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.684 qpair failed and we were unable to recover it. 
00:28:55.684 [2024-11-20 14:49:02.440540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.684 [2024-11-20 14:49:02.440547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.684 qpair failed and we were unable to recover it. 00:28:55.684 [2024-11-20 14:49:02.440913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.684 [2024-11-20 14:49:02.440921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.684 qpair failed and we were unable to recover it. 00:28:55.684 [2024-11-20 14:49:02.441221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.684 [2024-11-20 14:49:02.441228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.684 qpair failed and we were unable to recover it. 00:28:55.684 [2024-11-20 14:49:02.441529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.684 [2024-11-20 14:49:02.441536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.684 qpair failed and we were unable to recover it. 00:28:55.684 [2024-11-20 14:49:02.441827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.684 [2024-11-20 14:49:02.441834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.684 qpair failed and we were unable to recover it. 
00:28:55.684 [2024-11-20 14:49:02.442119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.684 [2024-11-20 14:49:02.442126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.684 qpair failed and we were unable to recover it. 00:28:55.684 [2024-11-20 14:49:02.442311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.684 [2024-11-20 14:49:02.442318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.684 qpair failed and we were unable to recover it. 00:28:55.684 [2024-11-20 14:49:02.442546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.684 [2024-11-20 14:49:02.442553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.684 qpair failed and we were unable to recover it. 00:28:55.684 [2024-11-20 14:49:02.442879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.684 [2024-11-20 14:49:02.442885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.684 qpair failed and we were unable to recover it. 00:28:55.684 [2024-11-20 14:49:02.443101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.684 [2024-11-20 14:49:02.443108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.684 qpair failed and we were unable to recover it. 
00:28:55.688 [2024-11-20 14:49:02.474672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.474679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.474999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.475006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.475297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.475304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.475607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.475614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.475827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.475833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 
00:28:55.688 [2024-11-20 14:49:02.476002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.476008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.476280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.476287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.476586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.476593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.476876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.476883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.477083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.477090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 
00:28:55.688 [2024-11-20 14:49:02.477428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.477435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.477607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.477614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.477922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.477928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.478081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.478088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.478280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.478287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 
00:28:55.688 [2024-11-20 14:49:02.478579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.478586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.478933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.478939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.479234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.479240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.479550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.479556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.479886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.479892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 
00:28:55.688 [2024-11-20 14:49:02.480176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.480183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.480486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.480493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.480790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.480797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.480951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.480958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.481143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.481150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 
00:28:55.688 [2024-11-20 14:49:02.481391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.481398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.481691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.481698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.482015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.482023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.482328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.688 [2024-11-20 14:49:02.482336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.688 qpair failed and we were unable to recover it. 00:28:55.688 [2024-11-20 14:49:02.482651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.482658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 
00:28:55.689 [2024-11-20 14:49:02.482953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.482959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.483116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.483123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.483430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.483437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.483763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.483769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.484080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.484086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 
00:28:55.689 [2024-11-20 14:49:02.484373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.484380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.484679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.484685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.484992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.484999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.485307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.485314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.485632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.485638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 
00:28:55.689 [2024-11-20 14:49:02.485914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.485921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.486229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.486236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.486606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.486614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.486920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.486926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.487209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.487216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 
00:28:55.689 [2024-11-20 14:49:02.487373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.487380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.487651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.487658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.487788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.487796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.487968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.487975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.488308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.488316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 
00:28:55.689 [2024-11-20 14:49:02.488602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.488609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.488900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.488907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.489089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.489096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.489431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.489437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.489725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.489732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 
00:28:55.689 [2024-11-20 14:49:02.489891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.489897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.490085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.490091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.490409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.490415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.689 [2024-11-20 14:49:02.490701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.689 [2024-11-20 14:49:02.490707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.689 qpair failed and we were unable to recover it. 00:28:55.690 [2024-11-20 14:49:02.491005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.690 [2024-11-20 14:49:02.491012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.690 qpair failed and we were unable to recover it. 
00:28:55.690 [2024-11-20 14:49:02.491174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.690 [2024-11-20 14:49:02.491182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.690 qpair failed and we were unable to recover it. 00:28:55.690 [2024-11-20 14:49:02.491543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.690 [2024-11-20 14:49:02.491550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.690 qpair failed and we were unable to recover it. 00:28:55.690 [2024-11-20 14:49:02.491827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.690 [2024-11-20 14:49:02.491834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.690 qpair failed and we were unable to recover it. 00:28:55.690 [2024-11-20 14:49:02.492136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.690 [2024-11-20 14:49:02.492143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.690 qpair failed and we were unable to recover it. 00:28:55.690 [2024-11-20 14:49:02.492347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.690 [2024-11-20 14:49:02.492354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.690 qpair failed and we were unable to recover it. 
00:28:55.690 [2024-11-20 14:49:02.492399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.690 [2024-11-20 14:49:02.492405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.690 qpair failed and we were unable to recover it. 00:28:55.690 [2024-11-20 14:49:02.492571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.690 [2024-11-20 14:49:02.492577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.690 qpair failed and we were unable to recover it. 00:28:55.690 [2024-11-20 14:49:02.492856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.690 [2024-11-20 14:49:02.492864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.690 qpair failed and we were unable to recover it. 00:28:55.690 [2024-11-20 14:49:02.493196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.690 [2024-11-20 14:49:02.493203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.690 qpair failed and we were unable to recover it. 00:28:55.690 [2024-11-20 14:49:02.493552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.690 [2024-11-20 14:49:02.493559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.690 qpair failed and we were unable to recover it. 
00:28:55.690 [2024-11-20 14:49:02.493842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.690 [2024-11-20 14:49:02.493849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.690 qpair failed and we were unable to recover it. 00:28:55.690 [2024-11-20 14:49:02.494059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.690 [2024-11-20 14:49:02.494066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.690 qpair failed and we were unable to recover it. 00:28:55.690 [2024-11-20 14:49:02.494319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.690 [2024-11-20 14:49:02.494326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.690 qpair failed and we were unable to recover it. 00:28:55.690 [2024-11-20 14:49:02.494620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.690 [2024-11-20 14:49:02.494627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.690 qpair failed and we were unable to recover it. 00:28:55.690 [2024-11-20 14:49:02.494963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.690 [2024-11-20 14:49:02.494969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.690 qpair failed and we were unable to recover it. 
00:28:55.690 [2024-11-20 14:49:02.495129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.690 [2024-11-20 14:49:02.495136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.690 qpair failed and we were unable to recover it.
00:28:55.694 [2024-11-20 14:49:02.527019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.527025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.527345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.527353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.527466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.527473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.527697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.527704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.528005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.528013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 
00:28:55.694 [2024-11-20 14:49:02.528177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.528185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.528406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.528414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.528724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.528731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.529033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.529040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.529395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.529403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 
00:28:55.694 [2024-11-20 14:49:02.529612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.529619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.529899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.529907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.530190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.530197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.530508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.530515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.530843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.530850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 
00:28:55.694 [2024-11-20 14:49:02.531131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.531138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.531506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.531514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.531803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.531810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.532113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.532120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.532416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.532424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 
00:28:55.694 [2024-11-20 14:49:02.532732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.532739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.533000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.533008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.533173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.533180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.533545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.533552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.533729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.533737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 
00:28:55.694 [2024-11-20 14:49:02.534069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.534077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.534365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.694 [2024-11-20 14:49:02.534373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.694 qpair failed and we were unable to recover it. 00:28:55.694 [2024-11-20 14:49:02.534670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.534677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.534980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.534987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.535291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.535299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 
00:28:55.695 [2024-11-20 14:49:02.535613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.535621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.535941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.535949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.536241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.536252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.536569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.536577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.536873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.536881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 
00:28:55.695 [2024-11-20 14:49:02.537182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.537190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.537434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.537441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.537790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.537798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.538130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.538137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.538474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.538481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 
00:28:55.695 [2024-11-20 14:49:02.538749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.538756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.539081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.539087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.539394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.539402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.539706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.539713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.539892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.539899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 
00:28:55.695 [2024-11-20 14:49:02.540181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.540189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.540477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.540485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.540639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.540646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.540995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.541001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.541162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.541169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 
00:28:55.695 [2024-11-20 14:49:02.541525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.541532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.541847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.541854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.542186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.542193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.542542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.542550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.542860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.542867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 
00:28:55.695 [2024-11-20 14:49:02.543139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.543145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.543468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.543476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.543778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.543785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.695 [2024-11-20 14:49:02.544063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.695 [2024-11-20 14:49:02.544070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.695 qpair failed and we were unable to recover it. 00:28:55.696 [2024-11-20 14:49:02.544383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.544391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 
00:28:55.696 [2024-11-20 14:49:02.544718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.544726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 00:28:55.696 [2024-11-20 14:49:02.545036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.545042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 00:28:55.696 [2024-11-20 14:49:02.545349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.545357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 00:28:55.696 [2024-11-20 14:49:02.545544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.545551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 00:28:55.696 [2024-11-20 14:49:02.545736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.545744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 
00:28:55.696 [2024-11-20 14:49:02.546047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.546055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 00:28:55.696 [2024-11-20 14:49:02.546341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.546348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 00:28:55.696 [2024-11-20 14:49:02.546638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.546645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 00:28:55.696 [2024-11-20 14:49:02.546952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.546958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 00:28:55.696 [2024-11-20 14:49:02.547102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.547109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 
00:28:55.696 [2024-11-20 14:49:02.547383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.547390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 00:28:55.696 [2024-11-20 14:49:02.547746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.547753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 00:28:55.696 [2024-11-20 14:49:02.547794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.547801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 00:28:55.696 [2024-11-20 14:49:02.547966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.547972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 00:28:55.696 [2024-11-20 14:49:02.548267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.548274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 
00:28:55.696 [2024-11-20 14:49:02.548635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.548642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 00:28:55.696 [2024-11-20 14:49:02.548962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.548969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 00:28:55.696 [2024-11-20 14:49:02.549282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.549289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 00:28:55.696 [2024-11-20 14:49:02.549582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.549589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 00:28:55.696 [2024-11-20 14:49:02.549744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.696 [2024-11-20 14:49:02.549752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.696 qpair failed and we were unable to recover it. 
00:28:55.700 [2024-11-20 14:49:02.581029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.700 [2024-11-20 14:49:02.581037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.700 qpair failed and we were unable to recover it. 00:28:55.700 [2024-11-20 14:49:02.581425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.700 [2024-11-20 14:49:02.581432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.700 qpair failed and we were unable to recover it. 00:28:55.700 [2024-11-20 14:49:02.581744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.700 [2024-11-20 14:49:02.581751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.700 qpair failed and we were unable to recover it. 00:28:55.700 [2024-11-20 14:49:02.581940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.700 [2024-11-20 14:49:02.581946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.700 qpair failed and we were unable to recover it. 00:28:55.700 [2024-11-20 14:49:02.582282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.700 [2024-11-20 14:49:02.582289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.700 qpair failed and we were unable to recover it. 
00:28:55.700 [2024-11-20 14:49:02.582594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.700 [2024-11-20 14:49:02.582600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.700 qpair failed and we were unable to recover it. 00:28:55.700 [2024-11-20 14:49:02.582897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.700 [2024-11-20 14:49:02.582904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.700 qpair failed and we were unable to recover it. 00:28:55.700 [2024-11-20 14:49:02.583199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.700 [2024-11-20 14:49:02.583205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.700 qpair failed and we were unable to recover it. 00:28:55.700 [2024-11-20 14:49:02.583378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.700 [2024-11-20 14:49:02.583386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.700 qpair failed and we were unable to recover it. 00:28:55.700 [2024-11-20 14:49:02.583711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.700 [2024-11-20 14:49:02.583718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.700 qpair failed and we were unable to recover it. 
00:28:55.700 [2024-11-20 14:49:02.584047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.700 [2024-11-20 14:49:02.584054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.700 qpair failed and we were unable to recover it. 00:28:55.700 [2024-11-20 14:49:02.584344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.700 [2024-11-20 14:49:02.584351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.700 qpair failed and we were unable to recover it. 00:28:55.700 [2024-11-20 14:49:02.584666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.700 [2024-11-20 14:49:02.584673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.700 qpair failed and we were unable to recover it. 00:28:55.700 [2024-11-20 14:49:02.584956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.700 [2024-11-20 14:49:02.584963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.700 qpair failed and we were unable to recover it. 00:28:55.700 [2024-11-20 14:49:02.585238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.585248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 
00:28:55.701 [2024-11-20 14:49:02.585545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.585552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.585831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.585837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.586172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.586179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.586482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.586489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.586857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.586864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 
00:28:55.701 [2024-11-20 14:49:02.587172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.587178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.587488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.587495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.587796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.587804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.588095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.588102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.588391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.588398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 
00:28:55.701 [2024-11-20 14:49:02.588614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.588621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.588812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.588819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.589085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.589091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.589384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.589391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.589772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.589778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 
00:28:55.701 [2024-11-20 14:49:02.590097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.590104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.590304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.590311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.590616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.590622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.590930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.590938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.591274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.591283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 
00:28:55.701 [2024-11-20 14:49:02.591549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.591557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.591859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.591866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.592155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.592161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.592495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.592502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.592811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.592818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 
00:28:55.701 [2024-11-20 14:49:02.593160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.593167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.593453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.593459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.593772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.593779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.594094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.594101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.594411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.594419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 
00:28:55.701 [2024-11-20 14:49:02.594738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.594745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.594927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.594933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.595212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.595219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.701 [2024-11-20 14:49:02.595554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.701 [2024-11-20 14:49:02.595562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.701 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.595877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.595884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 
00:28:55.702 [2024-11-20 14:49:02.596179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.596186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.596388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.596395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.596558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.596565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.596916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.596922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.597209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.597215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 
00:28:55.702 [2024-11-20 14:49:02.597500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.597507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.597807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.597814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.598112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.598118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.598442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.598449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.598761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.598768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 
00:28:55.702 [2024-11-20 14:49:02.599054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.599061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.599376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.599383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.599697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.599704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.599862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.599869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.600054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.600061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 
00:28:55.702 [2024-11-20 14:49:02.600334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.600341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.600708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.600715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.601037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.601043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.601362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.601369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.601590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.601597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 
00:28:55.702 [2024-11-20 14:49:02.601868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.601875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.602059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.602066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.602331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.602338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.602667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.602674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 00:28:55.702 [2024-11-20 14:49:02.602870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.602878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it. 
00:28:55.702 [2024-11-20 14:49:02.603058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.702 [2024-11-20 14:49:02.603066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.702 qpair failed and we were unable to recover it.
00:28:55.706 [last 3-message sequence repeated 114 more times between 14:49:02.603 and 14:49:02.635; all with errno = 111, tqpair=0x7f3c94000b90, addr=10.0.0.2, port=4420]
00:28:55.706 [2024-11-20 14:49:02.635408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.706 [2024-11-20 14:49:02.635416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.706 qpair failed and we were unable to recover it. 00:28:55.706 [2024-11-20 14:49:02.635714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.706 [2024-11-20 14:49:02.635721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.706 qpair failed and we were unable to recover it. 00:28:55.706 [2024-11-20 14:49:02.636009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.706 [2024-11-20 14:49:02.636016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.706 qpair failed and we were unable to recover it. 00:28:55.706 [2024-11-20 14:49:02.636316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.706 [2024-11-20 14:49:02.636323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.706 qpair failed and we were unable to recover it. 00:28:55.706 [2024-11-20 14:49:02.636625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.706 [2024-11-20 14:49:02.636633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.706 qpair failed and we were unable to recover it. 
00:28:55.706 [2024-11-20 14:49:02.636928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.706 [2024-11-20 14:49:02.636934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.706 qpair failed and we were unable to recover it. 00:28:55.706 [2024-11-20 14:49:02.637207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.706 [2024-11-20 14:49:02.637214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.706 qpair failed and we were unable to recover it. 00:28:55.706 [2024-11-20 14:49:02.637563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.637570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.637886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.637893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.638204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.638211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 
00:28:55.707 [2024-11-20 14:49:02.638383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.638390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.638693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.638700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.638995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.639002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.639377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.639384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.639682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.639689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 
00:28:55.707 [2024-11-20 14:49:02.639996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.640003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.640313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.640323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.640624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.640631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.640931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.640938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.641220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.641227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 
00:28:55.707 [2024-11-20 14:49:02.641546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.641553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.641916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.641922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.642235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.642242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.642588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.642595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.642876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.642882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 
00:28:55.707 [2024-11-20 14:49:02.643173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.643180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.643525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.643532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.643789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.643796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.643946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.643953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.644277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.644284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 
00:28:55.707 [2024-11-20 14:49:02.644596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.644604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.644869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.644876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.645176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.645182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.645384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.645391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.645563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.645571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 
00:28:55.707 [2024-11-20 14:49:02.645867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.645874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.646164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.646170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.646452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.646459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.646640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.646647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 00:28:55.707 [2024-11-20 14:49:02.646971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.707 [2024-11-20 14:49:02.646979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.707 qpair failed and we were unable to recover it. 
00:28:55.708 [2024-11-20 14:49:02.647273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.647280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.647592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.647599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.647914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.647921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.648195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.648202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.648476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.648485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 
00:28:55.708 [2024-11-20 14:49:02.648827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.648835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.649123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.649130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.649401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.649408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.649580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.649587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.649779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.649785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 
00:28:55.708 [2024-11-20 14:49:02.649920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.649926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.650229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.650235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.650618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.650625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.650814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.650821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.651138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.651145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 
00:28:55.708 [2024-11-20 14:49:02.651423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.651430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.651735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.651742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.652075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.652082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.652372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.652379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.652568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.652575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 
00:28:55.708 [2024-11-20 14:49:02.652881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.652888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.653184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.653191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.653497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.653504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.653845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.653853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.654013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.654020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 
00:28:55.708 [2024-11-20 14:49:02.654340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.654347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.654712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.654719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.654760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.654766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.655113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.655121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.655448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.655456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 
00:28:55.708 [2024-11-20 14:49:02.655775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.655782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.656110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.656117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.708 qpair failed and we were unable to recover it. 00:28:55.708 [2024-11-20 14:49:02.656399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.708 [2024-11-20 14:49:02.656406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-11-20 14:49:02.656759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-11-20 14:49:02.656767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-11-20 14:49:02.657087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-11-20 14:49:02.657094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 
00:28:55.709 [2024-11-20 14:49:02.657406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.709 [2024-11-20 14:49:02.657414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:55.709 qpair failed and we were unable to recover it.
00:28:55.712 [... the identical connect()/qpair-recovery failure (posix.c:1054, nvme_tcp.c:2288, errno = 111, tqpair=0x7f3c94000b90, addr=10.0.0.2, port=4420) repeated continuously from 14:49:02.657 through 14:49:02.689 ...]
00:28:55.712 [2024-11-20 14:49:02.689916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.689923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.690248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.690257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.690537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.690545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.690950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.690956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.691235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.691242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 
00:28:55.713 [2024-11-20 14:49:02.691538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.691546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.691867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.691874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.692163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.692170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.692459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.692466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.692768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.692776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 
00:28:55.713 [2024-11-20 14:49:02.692849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.692855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.693028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.693035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.693345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.693352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.693669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.693675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.693979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.693986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 
00:28:55.713 [2024-11-20 14:49:02.694313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.694320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.694629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.694636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.694941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.694949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.695259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.695266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.695562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.695568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 
00:28:55.713 [2024-11-20 14:49:02.695888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.695895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.696206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.696212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.696372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.696380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.696716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.696723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.697013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.697020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 
00:28:55.713 [2024-11-20 14:49:02.697194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.697201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.697473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.697480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.697719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.713 [2024-11-20 14:49:02.697725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.713 qpair failed and we were unable to recover it. 00:28:55.713 [2024-11-20 14:49:02.698031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.698038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.698219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.698226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 
00:28:55.714 [2024-11-20 14:49:02.698415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.698423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.698710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.698716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.699001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.699008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.699298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.699305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.699649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.699657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 
00:28:55.714 [2024-11-20 14:49:02.699935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.699943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.700237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.700248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.700413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.700420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.700749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.700756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.700921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.700928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 
00:28:55.714 [2024-11-20 14:49:02.701221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.701229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.701514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.701523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.701817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.701824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.702119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.702127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.702504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.702512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 
00:28:55.714 [2024-11-20 14:49:02.702829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.702837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.703133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.703140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.703338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.703348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.703683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.703689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.704007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.704013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 
00:28:55.714 [2024-11-20 14:49:02.704289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.704296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.704659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.704666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.704956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.704963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.705251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.705258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.705539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.705546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 
00:28:55.714 [2024-11-20 14:49:02.705824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.705831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.706123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.706129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.714 qpair failed and we were unable to recover it. 00:28:55.714 [2024-11-20 14:49:02.706436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.714 [2024-11-20 14:49:02.706443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.706749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.706756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.706943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.706950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 
00:28:55.715 [2024-11-20 14:49:02.707262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.707269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.707541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.707547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.707849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.707855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.708162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.708169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.708344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.708351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 
00:28:55.715 [2024-11-20 14:49:02.708631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.708639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.709021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.709028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.709228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.709235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.709486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.709493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.709808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.709815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 
00:28:55.715 [2024-11-20 14:49:02.710105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.710111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.710399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.710407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.710696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.710703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.710855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.710862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.711176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.711183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 
00:28:55.715 [2024-11-20 14:49:02.711492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.711499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.711798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.711804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.712083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.712089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.712381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.712388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.712563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.712569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 
00:28:55.715 [2024-11-20 14:49:02.712890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.712898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.713071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.713080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.713270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.713277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.713464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.713470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.713808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.713815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 
00:28:55.715 [2024-11-20 14:49:02.714086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-11-20 14:49:02.714092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-11-20 14:49:02.714408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.714415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.714736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.714744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.715071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.715078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.715371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.715378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 
00:28:55.716 [2024-11-20 14:49:02.715640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.715648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.715979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.715985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.716266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.716274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.716578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.716586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.716880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.716887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 
00:28:55.716 [2024-11-20 14:49:02.717056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.717063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.717354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.717361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.717664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.717671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.717964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.717970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.718261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.718268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 
00:28:55.716 [2024-11-20 14:49:02.718423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.718430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.718774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.718782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.719095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.719102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.719283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.719290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.719585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.719592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 
00:28:55.716 [2024-11-20 14:49:02.719773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.719780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.720082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.720088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.720266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.720273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.720602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.720609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.720929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.720936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 
00:28:55.716 [2024-11-20 14:49:02.720999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.721006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.721216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.721223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.721527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.721534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.721742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.721749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.722088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.722094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 
00:28:55.716 [2024-11-20 14:49:02.722271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.722279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.722570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-11-20 14:49:02.722576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-11-20 14:49:02.722867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-11-20 14:49:02.722874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-11-20 14:49:02.723071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-11-20 14:49:02.723078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-11-20 14:49:02.723399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-11-20 14:49:02.723406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 
00:28:55.717 [2024-11-20 14:49:02.723578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-11-20 14:49:02.723585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-11-20 14:49:02.723880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-11-20 14:49:02.723888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-11-20 14:49:02.724091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-11-20 14:49:02.724098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-11-20 14:49:02.724381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-11-20 14:49:02.724389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-11-20 14:49:02.724673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-11-20 14:49:02.724680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 
00:28:55.998 [2024-11-20 14:49:02.725020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.725028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.725320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.725329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.725522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.725529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.725722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.725730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.725889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.725895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 
00:28:55.998 [2024-11-20 14:49:02.726167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.726174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.726475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.726482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.726818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.726826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.727019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.727026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.727323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.727331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 
00:28:55.998 [2024-11-20 14:49:02.727506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.727513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.727688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.727695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.728032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.728039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.728351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.728358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.728681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.728689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 
00:28:55.998 [2024-11-20 14:49:02.728989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.728996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.729314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.729321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.729692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.729699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.729988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.729994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.730173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.730180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 
00:28:55.998 [2024-11-20 14:49:02.730481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.730489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.730775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.730782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.731101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.731107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.731269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.731276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.731441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.731447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 
00:28:55.998 [2024-11-20 14:49:02.731791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.731799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.732067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.732074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.732363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.732370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.732678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.732685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.732877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.732884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 
00:28:55.998 [2024-11-20 14:49:02.733189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.733196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.733514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.733521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.998 qpair failed and we were unable to recover it. 00:28:55.998 [2024-11-20 14:49:02.733719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.998 [2024-11-20 14:49:02.733726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.734039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.734046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.734352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.734359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 
00:28:55.999 [2024-11-20 14:49:02.734657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.734664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.734719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.734727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.735004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.735011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.735181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.735187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.735477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.735484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 
00:28:55.999 [2024-11-20 14:49:02.735769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.735777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.736103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.736111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.736438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.736445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.736768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.736774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.737080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.737087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 
00:28:55.999 [2024-11-20 14:49:02.737390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.737397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.737692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.737699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.737895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.737902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.738189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.738197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.738495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.738503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 
00:28:55.999 [2024-11-20 14:49:02.738823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.738829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.738917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.738923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.739103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.739109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.739403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.739409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.739725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.739732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 
00:28:55.999 [2024-11-20 14:49:02.739928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.739935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.740221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.740228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.740397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.740403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.740714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.740721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.741016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.741023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 
00:28:55.999 [2024-11-20 14:49:02.741242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.741257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.741592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.741600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 4099736 Killed "${NVMF_APP[@]}" "$@" 00:28:55.999 [2024-11-20 14:49:02.741892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.741900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.742250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.742258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:55.999 [2024-11-20 14:49:02.742615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.742623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 
00:28:55.999 14:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:28:55.999 [2024-11-20 14:49:02.742917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.999 [2024-11-20 14:49:02.742924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:55.999 qpair failed and we were unable to recover it. 00:28:56.000 14:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:56.000 [2024-11-20 14:49:02.743219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.743226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 14:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:56.000 [2024-11-20 14:49:02.743521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.743528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 14:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:56.000 [2024-11-20 14:49:02.743808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.743816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 
00:28:56.000 14:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.000 [2024-11-20 14:49:02.744143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.744151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.744352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.744360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.744654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.744662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.744971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.744978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.745278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.745286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 
00:28:56.000 [2024-11-20 14:49:02.745587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.745594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.745881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.745889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.746185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.746192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.746496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.746503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.746785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.746794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 
00:28:56.000 [2024-11-20 14:49:02.747079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.747086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.747408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.747416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.747726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.747734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.748028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.748036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.748234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.748241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 
00:28:56.000 [2024-11-20 14:49:02.748428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.748436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.748713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.748719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.749004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.749011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.749202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.749210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.749550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.749557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 
00:28:56.000 [2024-11-20 14:49:02.749851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.749858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 14:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4100844 00:28:56.000 [2024-11-20 14:49:02.750147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.750156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 14:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4100844 00:28:56.000 [2024-11-20 14:49:02.750423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 14:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4100844 ']' 00:28:56.000 [2024-11-20 14:49:02.750432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 14:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.000 14:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.000 14:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:56.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.000 14:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:56.000 [2024-11-20 14:49:02.750757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.750765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 14:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.000 14:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.000 [2024-11-20 14:49:02.750937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.750946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.751220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.751228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 
00:28:56.000 [2024-11-20 14:49:02.751419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.751428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-11-20 14:49:02.751702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-11-20 14:49:02.751710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.751973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.751980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.752249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.752257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.752422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.752430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 
00:28:56.001 [2024-11-20 14:49:02.752726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.752734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.753036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.753044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.753210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.753219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.753324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.753331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.753626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.753633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 
00:28:56.001 [2024-11-20 14:49:02.753809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.753817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.754143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.754151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.754324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.754331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.754684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.754691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.754865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.754874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 
00:28:56.001 [2024-11-20 14:49:02.755152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.755161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.755457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.755465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.755752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.755760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.755942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.755950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.756228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.756235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 
00:28:56.001 [2024-11-20 14:49:02.756646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.756671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.756869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.756880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.757066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.757077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.757385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.757397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.757710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.757722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 
00:28:56.001 [2024-11-20 14:49:02.757921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.757930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.758208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.758219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.758542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.758556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.758867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.758875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.759187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.759198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 
00:28:56.001 [2024-11-20 14:49:02.759405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.759417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.759608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.759617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.759854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.759866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.760054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.760063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.760357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.760367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 
00:28:56.001 [2024-11-20 14:49:02.760678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.760690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.761027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.761038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.761359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.761368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.761415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.761422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-11-20 14:49:02.761613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-11-20 14:49:02.761628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 
00:28:56.002 [2024-11-20 14:49:02.761955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-11-20 14:49:02.761968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-11-20 14:49:02.762233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-11-20 14:49:02.762241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-11-20 14:49:02.762335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-11-20 14:49:02.762343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-11-20 14:49:02.762677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-11-20 14:49:02.762685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-11-20 14:49:02.762986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-11-20 14:49:02.762994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 
00:28:56.002 [... the connect() failed, errno = 111 / qpair failure message pair above repeats for each retry from 14:49:02.762 through 14:49:02.780 ...] 
00:28:56.003 [2024-11-20 14:49:02.781732] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:28:56.003 [2024-11-20 14:49:02.781763] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:28:56.005 [... connect() failed, errno = 111 / qpair failure message pair continues to repeat through 14:49:02.798 ...] 
00:28:56.005 [2024-11-20 14:49:02.798626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.798633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.005 [2024-11-20 14:49:02.798934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.798941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.005 [2024-11-20 14:49:02.799266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.799273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.005 [2024-11-20 14:49:02.799602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.799609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.005 [2024-11-20 14:49:02.799811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.799818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 
00:28:56.005 [2024-11-20 14:49:02.799990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.799998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.005 [2024-11-20 14:49:02.800314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.800322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.005 [2024-11-20 14:49:02.800608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.800615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.005 [2024-11-20 14:49:02.800899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.800905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.005 [2024-11-20 14:49:02.801219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.801226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 
00:28:56.005 [2024-11-20 14:49:02.801613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.801621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.005 [2024-11-20 14:49:02.801986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.801993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.005 [2024-11-20 14:49:02.802278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.802285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.005 [2024-11-20 14:49:02.802573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.802581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.005 [2024-11-20 14:49:02.802770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.802776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 
00:28:56.005 [2024-11-20 14:49:02.803158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.803165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.005 [2024-11-20 14:49:02.803479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.803487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.005 [2024-11-20 14:49:02.803799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.803807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.005 [2024-11-20 14:49:02.804110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.804118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.005 [2024-11-20 14:49:02.804431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.804438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 
00:28:56.005 [2024-11-20 14:49:02.804778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.804785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.005 [2024-11-20 14:49:02.805090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.005 [2024-11-20 14:49:02.805097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.005 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.805267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.805274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.805614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.805621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.805783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.805790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 
00:28:56.006 [2024-11-20 14:49:02.806052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.806059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.806408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.806415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.806748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.806756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.807049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.807056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.807359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.807366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 
00:28:56.006 [2024-11-20 14:49:02.807709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.807718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.808017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.808025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.808189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.808197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.808552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.808562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.808849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.808855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 
00:28:56.006 [2024-11-20 14:49:02.809164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.809171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.809456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.809463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.809632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.809639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.809946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.809953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.810356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.810364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 
00:28:56.006 [2024-11-20 14:49:02.810681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.810688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.811082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.811089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.811390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.811398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.811633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.811641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.811800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.811807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 
00:28:56.006 [2024-11-20 14:49:02.812104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.812112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.812508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.812516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.812907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.812914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.813231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.813238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.813455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.813462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 
00:28:56.006 [2024-11-20 14:49:02.813632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.813639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.814005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.814013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.814308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.814318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.814634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.814642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.814966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.814973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 
00:28:56.006 [2024-11-20 14:49:02.815144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.815151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.815519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-11-20 14:49:02.815526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-11-20 14:49:02.815931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.815938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-11-20 14:49:02.816253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.816261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-11-20 14:49:02.816593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.816601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 
00:28:56.007 [2024-11-20 14:49:02.816894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.816902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-11-20 14:49:02.817143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.817150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-11-20 14:49:02.817467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.817475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-11-20 14:49:02.817547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.817553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-11-20 14:49:02.817845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.817852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 
00:28:56.007 [2024-11-20 14:49:02.818180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.818188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-11-20 14:49:02.818397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.818405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-11-20 14:49:02.818587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.818594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-11-20 14:49:02.818760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.818767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-11-20 14:49:02.818954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.818960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 
00:28:56.007 [2024-11-20 14:49:02.819135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.819142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-11-20 14:49:02.819280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.819287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-11-20 14:49:02.819577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.819585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-11-20 14:49:02.819911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.819920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-11-20 14:49:02.820236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-11-20 14:49:02.820243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 
00:28:56.007 [2024-11-20 14:49:02.820308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.007 [2024-11-20 14:49:02.820315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:28:56.007 qpair failed and we were unable to recover it.
00:28:56.010 [... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." triplets repeat continuously from 14:49:02.820575 through 14:49:02.852522; repeats elided ...]
00:28:56.010 [2024-11-20 14:49:02.852824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.010 [2024-11-20 14:49:02.852831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.010 qpair failed and we were unable to recover it. 00:28:56.010 [2024-11-20 14:49:02.853112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.010 [2024-11-20 14:49:02.853119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.010 qpair failed and we were unable to recover it. 00:28:56.010 [2024-11-20 14:49:02.853287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.010 [2024-11-20 14:49:02.853295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.010 qpair failed and we were unable to recover it. 00:28:56.010 [2024-11-20 14:49:02.853597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.010 [2024-11-20 14:49:02.853607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.010 qpair failed and we were unable to recover it. 00:28:56.010 [2024-11-20 14:49:02.853914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.010 [2024-11-20 14:49:02.853922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.010 qpair failed and we were unable to recover it. 
00:28:56.010 [2024-11-20 14:49:02.854211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.010 [2024-11-20 14:49:02.854219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.010 qpair failed and we were unable to recover it. 00:28:56.010 [2024-11-20 14:49:02.854522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.010 [2024-11-20 14:49:02.854530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.010 qpair failed and we were unable to recover it. 00:28:56.010 [2024-11-20 14:49:02.854834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.010 [2024-11-20 14:49:02.854841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.010 qpair failed and we were unable to recover it. 00:28:56.010 [2024-11-20 14:49:02.855048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.010 [2024-11-20 14:49:02.855055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.010 qpair failed and we were unable to recover it. 00:28:56.010 [2024-11-20 14:49:02.855410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.010 [2024-11-20 14:49:02.855418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.010 qpair failed and we were unable to recover it. 
00:28:56.010 [2024-11-20 14:49:02.855727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.855734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.856097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.856105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.856423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.856431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.856756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.856764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.856932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.856941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 
00:28:56.011 [2024-11-20 14:49:02.857253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.857261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.857447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.857454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.857771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.857779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.858076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.858082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.858350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.858358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 
00:28:56.011 [2024-11-20 14:49:02.858680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.858687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.858868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.858875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.859201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.859210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.859395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.859403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.859694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.859701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 
00:28:56.011 [2024-11-20 14:49:02.860031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.860038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.860185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.860192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.860549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.860557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.860906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.860914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.861193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.861201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 
00:28:56.011 [2024-11-20 14:49:02.861516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.861524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.861714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.861721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.862059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.862066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.862223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.862231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.862591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.862600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 
00:28:56.011 [2024-11-20 14:49:02.862766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.862774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.863053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.863060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.863370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.863378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.863535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.863543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.863926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.863933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 
00:28:56.011 [2024-11-20 14:49:02.864116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:56.011 [2024-11-20 14:49:02.864288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.864296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.864488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.864495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.864872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.864879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.865125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.865132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.865368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.865376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 
00:28:56.011 [2024-11-20 14:49:02.865559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.865566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.865775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.865783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-11-20 14:49:02.866064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-11-20 14:49:02.866071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.866440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.866448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.866792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.866800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 
00:28:56.012 [2024-11-20 14:49:02.866993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.867001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.867317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.867326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.867640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.867647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.867928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.867936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.868186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.868194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 
00:28:56.012 [2024-11-20 14:49:02.868503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.868511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.868787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.868797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.869173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.869180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.869521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.869528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.869810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.869817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 
00:28:56.012 [2024-11-20 14:49:02.870123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.870131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.870442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.870452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.870757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.870765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.871063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.871071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.871400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.871407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 
00:28:56.012 [2024-11-20 14:49:02.871586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.871595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.871780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.871788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.871976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.871984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.872158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.872165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.872525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.872534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 
00:28:56.012 [2024-11-20 14:49:02.872825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.872833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.873130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.873138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.873443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.873452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.873608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.873615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.873910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.873917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 
00:28:56.012 [2024-11-20 14:49:02.874182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.874190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.874396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.874403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.874771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.874778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.875126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-11-20 14:49:02.875133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-11-20 14:49:02.875434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.875442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 
00:28:56.013 [2024-11-20 14:49:02.875614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.875622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.875775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.875783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.876053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.876061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.876383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.876392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.876682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.876690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 
00:28:56.013 [2024-11-20 14:49:02.876878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.876886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.877078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.877086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.877422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.877431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.877700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.877707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.877879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.877886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 
00:28:56.013 [2024-11-20 14:49:02.878018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.878025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.878377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.878385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.878721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.878729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.878916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.878923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.879219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.879227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 
00:28:56.013 [2024-11-20 14:49:02.879511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.879519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.879837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.879847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.880139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.880147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.880317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.880326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.880647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.880654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 
00:28:56.013 [2024-11-20 14:49:02.880940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.880948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.881248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.881256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.881434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.881442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.881660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.881667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.881946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.881954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 
00:28:56.013 [2024-11-20 14:49:02.882288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.882295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.882601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.882609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.882903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.882912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.883200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.883208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.883513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.883520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 
00:28:56.013 [2024-11-20 14:49:02.883806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.883814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.884112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.884119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.884409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.884417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.884721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.884729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.885034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.885042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 
00:28:56.013 [2024-11-20 14:49:02.885384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.013 [2024-11-20 14:49:02.885392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.013 qpair failed and we were unable to recover it. 00:28:56.013 [2024-11-20 14:49:02.885722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.885730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.886012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.886020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.886214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.886222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.886395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.886403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 
00:28:56.014 [2024-11-20 14:49:02.886583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.886591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.886895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.886902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.887221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.887228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.887431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.887438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.887773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.887780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 
00:28:56.014 [2024-11-20 14:49:02.887971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.887979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.888277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.888285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.888420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.888426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.888607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.888615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.888893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.888900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 
00:28:56.014 [2024-11-20 14:49:02.889015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.889022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.889355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.889362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.889440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.889447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.889636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.889644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.889987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.889995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 
00:28:56.014 [2024-11-20 14:49:02.890327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.890336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.890636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.890646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.890830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.890837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.891207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.891214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.891505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.891513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 
00:28:56.014 [2024-11-20 14:49:02.891804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.891812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.892201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.892208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.892504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.892511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.892818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.892825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.893152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.893159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 
00:28:56.014 [2024-11-20 14:49:02.893357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.893365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.893569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.893576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.893890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.893898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.894085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.894092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.894387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.894395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 
00:28:56.014 [2024-11-20 14:49:02.894704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.894711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.895000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.895006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.895385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.014 [2024-11-20 14:49:02.895392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.014 qpair failed and we were unable to recover it. 00:28:56.014 [2024-11-20 14:49:02.895721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.895728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.895919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.895926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 
00:28:56.015 [2024-11-20 14:49:02.896269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.896277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.896625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.896632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.896796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.896803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.897167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.897174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.897374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.897381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 
00:28:56.015 [2024-11-20 14:49:02.897557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.897563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.897915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.897923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.898214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.898222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.898532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.898539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.898856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.898863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 
00:28:56.015 [2024-11-20 14:49:02.899152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.899159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.899472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.899480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.899762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.899769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.900067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.900075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.900255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.900263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.900451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:56.015 [2024-11-20 14:49:02.900478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.015 [2024-11-20 14:49:02.900486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.015 [2024-11-20 14:49:02.900493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.015 [2024-11-20 14:49:02.900499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:56.015 [2024-11-20 14:49:02.900584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.900592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.900950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.900958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.901135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.901143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.901461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.901469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 
00:28:56.015 [2024-11-20 14:49:02.902196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:28:56.015 [2024-11-20 14:49:02.902220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:28:56.015 [2024-11-20 14:49:02.902426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:28:56.015 [2024-11-20 14:49:02.902427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:28:56.015 [2024-11-20 14:49:02.904004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.904012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.904333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.904340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.904646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.904653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.904921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.015 [2024-11-20 14:49:02.904928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.015 qpair failed and we were unable to recover it. 00:28:56.015 [2024-11-20 14:49:02.905319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.905327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 
00:28:56.016 [2024-11-20 14:49:02.905608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.905615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.905946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.905952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.906160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.906167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.906351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.906359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.906655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.906663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 
00:28:56.016 [2024-11-20 14:49:02.906855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.906862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.907188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.907195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.907502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.907509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.907860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.907867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.908059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.908067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 
00:28:56.016 [2024-11-20 14:49:02.908235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.908243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.908568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.908576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.908859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.908866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.909160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.909168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.909525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.909532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 
00:28:56.016 [2024-11-20 14:49:02.909692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.909699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.909889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.909896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.910195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.910203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.910518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.910526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.910715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.910723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 
00:28:56.016 [2024-11-20 14:49:02.911123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.911129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.911299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.911307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.911618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.911625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.911959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.911966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.912254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.912261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 
00:28:56.016 [2024-11-20 14:49:02.912457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.912465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.912825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.912833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.912986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.912994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.913212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.913219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.913383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.913390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 
00:28:56.016 [2024-11-20 14:49:02.913728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.913735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.913907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.913913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.914128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.914136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.914409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.914417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.914591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.914598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 
00:28:56.016 [2024-11-20 14:49:02.914921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.016 [2024-11-20 14:49:02.914928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.016 qpair failed and we were unable to recover it. 00:28:56.016 [2024-11-20 14:49:02.915092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.017 [2024-11-20 14:49:02.915099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.017 qpair failed and we were unable to recover it. 00:28:56.017 [2024-11-20 14:49:02.915428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.017 [2024-11-20 14:49:02.915436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.017 qpair failed and we were unable to recover it. 00:28:56.017 [2024-11-20 14:49:02.915609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.017 [2024-11-20 14:49:02.915616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.017 qpair failed and we were unable to recover it. 00:28:56.017 [2024-11-20 14:49:02.915971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.017 [2024-11-20 14:49:02.915978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.017 qpair failed and we were unable to recover it. 
00:28:56.017 [2024-11-20 14:49:02.916303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.017 [2024-11-20 14:49:02.916311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.017 qpair failed and we were unable to recover it. 00:28:56.017 [2024-11-20 14:49:02.916637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.017 [2024-11-20 14:49:02.916643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.017 qpair failed and we were unable to recover it. 00:28:56.017 [2024-11-20 14:49:02.916936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.017 [2024-11-20 14:49:02.916944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.017 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.917202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.917210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.917511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.917518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 
00:28:56.018 [2024-11-20 14:49:02.917824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.917831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.918000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.918009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.918188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.918194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.918554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.918561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.918871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.918879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 
00:28:56.018 [2024-11-20 14:49:02.919215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.919222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.919506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.919514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.919851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.919860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.920158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.920165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.920360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.920367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 
00:28:56.018 [2024-11-20 14:49:02.920549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.920557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.920939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.920947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.921264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.921272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.921556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.921565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.921863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.921870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 
00:28:56.018 [2024-11-20 14:49:02.922170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.922177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.922410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.922419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.922755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.922762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.923055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.923063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.923308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.923316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 
00:28:56.018 [2024-11-20 14:49:02.923377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.923386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.923731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.923738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.924008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.924015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.924195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.924202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.924540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.924548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 
00:28:56.018 [2024-11-20 14:49:02.924833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.924841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.925167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.925175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.925473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.925481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.925656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.925663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.925867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.925874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 
00:28:56.018 [2024-11-20 14:49:02.926207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.926214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.926392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.926399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.926568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.926576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.926750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.926757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 00:28:56.018 [2024-11-20 14:49:02.927064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.018 [2024-11-20 14:49:02.927071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.018 qpair failed and we were unable to recover it. 
00:28:56.019 [2024-11-20 14:49:02.927392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.927400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.927675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.927682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.927986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.927993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.928285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.928293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.928615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.928622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 
00:28:56.019 [2024-11-20 14:49:02.928665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.928671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.928956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.928963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.929260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.929268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.929534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.929541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.929923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.929930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 
00:28:56.019 [2024-11-20 14:49:02.930115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.930121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.930453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.930461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.930756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.930763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.930951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.930958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.931202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.931209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 
00:28:56.019 [2024-11-20 14:49:02.931386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.931394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.931691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.931699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.931858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.931865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.932147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.932154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.932335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.932343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 
00:28:56.019 [2024-11-20 14:49:02.932627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.932633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.932807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.932814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.933023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.933030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.933306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.933313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.933481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.933488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 
00:28:56.019 [2024-11-20 14:49:02.933800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.933809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.934056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.934063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.934248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.934256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.934541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.934548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.934865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.934872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 
00:28:56.019 [2024-11-20 14:49:02.935071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.935078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.935262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.935271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.935580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.935587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.935944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.935950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.936276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.936283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 
00:28:56.019 [2024-11-20 14:49:02.936659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.936667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.936711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.936718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.019 [2024-11-20 14:49:02.937066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.019 [2024-11-20 14:49:02.937073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.019 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.937371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.937379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.937693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.937701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 
00:28:56.020 [2024-11-20 14:49:02.937897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.937905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.938062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.938068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.938383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.938390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.938772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.938779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.938940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.938947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 
00:28:56.020 [2024-11-20 14:49:02.939276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.939283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.939494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.939501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.939858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.939866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.940154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.940161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.940330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.940337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 
00:28:56.020 [2024-11-20 14:49:02.940698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.940705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.940998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.941005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.941294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.941302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.941495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.941503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.941838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.941846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 
00:28:56.020 [2024-11-20 14:49:02.942170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.942177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.942519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.942526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.942699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.942706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.942873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.942880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.943227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.943234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 
00:28:56.020 [2024-11-20 14:49:02.943540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.943549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.943826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.943833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.944052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.944059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.944395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.944402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.944598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.944605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 
00:28:56.020 [2024-11-20 14:49:02.944878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.944887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.945177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.945184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.945229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.945236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.945456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.945462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.945757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.945765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 
00:28:56.020 [2024-11-20 14:49:02.946041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.946048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.946233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.946241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.946417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.946424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.020 [2024-11-20 14:49:02.946725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.020 [2024-11-20 14:49:02.946733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.020 qpair failed and we were unable to recover it. 00:28:56.021 [2024-11-20 14:49:02.947022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.021 [2024-11-20 14:49:02.947029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.021 qpair failed and we were unable to recover it. 
00:28:56.021 [2024-11-20 14:49:02.947222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.021 [2024-11-20 14:49:02.947229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.021 qpair failed and we were unable to recover it. 00:28:56.021 [2024-11-20 14:49:02.947396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.021 [2024-11-20 14:49:02.947403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.021 qpair failed and we were unable to recover it. 00:28:56.021 [2024-11-20 14:49:02.947697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.021 [2024-11-20 14:49:02.947704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.021 qpair failed and we were unable to recover it. 00:28:56.021 [2024-11-20 14:49:02.947965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.021 [2024-11-20 14:49:02.947973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.021 qpair failed and we were unable to recover it. 00:28:56.021 [2024-11-20 14:49:02.948308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.021 [2024-11-20 14:49:02.948316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.021 qpair failed and we were unable to recover it. 
00:28:56.021 [2024-11-20 14:49:02.948648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.021 [2024-11-20 14:49:02.948654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.021 qpair failed and we were unable to recover it. 00:28:56.021 [2024-11-20 14:49:02.948950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.021 [2024-11-20 14:49:02.948957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.021 qpair failed and we were unable to recover it. 00:28:56.021 [2024-11-20 14:49:02.949322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.021 [2024-11-20 14:49:02.949329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.021 qpair failed and we were unable to recover it. 00:28:56.021 [2024-11-20 14:49:02.949539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.021 [2024-11-20 14:49:02.949546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.021 qpair failed and we were unable to recover it. 00:28:56.021 [2024-11-20 14:49:02.949731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.021 [2024-11-20 14:49:02.949738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.021 qpair failed and we were unable to recover it. 
00:28:56.021 [2024-11-20 14:49:02.950067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.021 [2024-11-20 14:49:02.950074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.021 qpair failed and we were unable to recover it. 00:28:56.021 [2024-11-20 14:49:02.950235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.021 [2024-11-20 14:49:02.950242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.021 qpair failed and we were unable to recover it. 00:28:56.021 [2024-11-20 14:49:02.950415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.021 [2024-11-20 14:49:02.950422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.021 qpair failed and we were unable to recover it. 00:28:56.021 [2024-11-20 14:49:02.950593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.021 [2024-11-20 14:49:02.950600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.021 qpair failed and we were unable to recover it. 00:28:56.021 [2024-11-20 14:49:02.950776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.021 [2024-11-20 14:49:02.950783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.021 qpair failed and we were unable to recover it. 
00:28:56.021 [2024-11-20 14:49:02.950990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.021 [2024-11-20 14:49:02.950997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.021 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." message triple repeats verbatim, with only timestamps advancing, through 2024-11-20 14:49:02.981000 ...]
00:28:56.024 [2024-11-20 14:49:02.981296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.024 [2024-11-20 14:49:02.981303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.024 qpair failed and we were unable to recover it. 00:28:56.024 [2024-11-20 14:49:02.981594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.024 [2024-11-20 14:49:02.981601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.024 qpair failed and we were unable to recover it. 00:28:56.024 [2024-11-20 14:49:02.981779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.024 [2024-11-20 14:49:02.981786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.024 qpair failed and we were unable to recover it. 00:28:56.024 [2024-11-20 14:49:02.982131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.024 [2024-11-20 14:49:02.982138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.024 qpair failed and we were unable to recover it. 00:28:56.024 [2024-11-20 14:49:02.982423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.024 [2024-11-20 14:49:02.982430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.024 qpair failed and we were unable to recover it. 
00:28:56.024 [2024-11-20 14:49:02.982615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.024 [2024-11-20 14:49:02.982621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.024 qpair failed and we were unable to recover it. 00:28:56.024 [2024-11-20 14:49:02.982833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.024 [2024-11-20 14:49:02.982841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.024 qpair failed and we were unable to recover it. 00:28:56.024 [2024-11-20 14:49:02.982995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.024 [2024-11-20 14:49:02.983003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.024 qpair failed and we were unable to recover it. 00:28:56.024 [2024-11-20 14:49:02.983161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.024 [2024-11-20 14:49:02.983168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.024 qpair failed and we were unable to recover it. 00:28:56.024 [2024-11-20 14:49:02.983404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.024 [2024-11-20 14:49:02.983411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.024 qpair failed and we were unable to recover it. 
00:28:56.024 [2024-11-20 14:49:02.983701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.024 [2024-11-20 14:49:02.983708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.984008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.984014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.984303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.984310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.984350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.984357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.984523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.984530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 
00:28:56.025 [2024-11-20 14:49:02.984713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.984720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.985035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.985043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.985353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.985360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.985662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.985669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.985836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.985843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 
00:28:56.025 [2024-11-20 14:49:02.986179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.986188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.986510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.986517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.986689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.986696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.987075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.987083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.987372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.987379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 
00:28:56.025 [2024-11-20 14:49:02.987755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.987761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.988082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.988088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.988380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.988387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.988570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.988578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.988737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.988744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 
00:28:56.025 [2024-11-20 14:49:02.989069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.989076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.989389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.989397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.989698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.989706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.990019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.990025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.990314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.990322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 
00:28:56.025 [2024-11-20 14:49:02.990639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.990646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.990952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.990958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.991257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.991264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.991460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.991467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.991823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.991830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 
00:28:56.025 [2024-11-20 14:49:02.992150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.992157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.992494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.992502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.992691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.992698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.992867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.992874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.993035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.993041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 
00:28:56.025 [2024-11-20 14:49:02.993378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.993385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.993569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.993576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.993758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.025 [2024-11-20 14:49:02.993765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.025 qpair failed and we were unable to recover it. 00:28:56.025 [2024-11-20 14:49:02.994104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.994110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.994388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.994395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 
00:28:56.026 [2024-11-20 14:49:02.994432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.994438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.994601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.994608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.994973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.994980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.995280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.995287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.995473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.995480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 
00:28:56.026 [2024-11-20 14:49:02.995634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.995641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.995681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.995689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.995768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.995774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.996071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.996077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.996380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.996387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 
00:28:56.026 [2024-11-20 14:49:02.996688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.996697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.997019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.997026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.997318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.997325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.997619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.997625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.997923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.997930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 
00:28:56.026 [2024-11-20 14:49:02.998245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.998252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.998559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.998567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.998953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.998960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.999130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.999136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.999294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.999301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 
00:28:56.026 [2024-11-20 14:49:02.999526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.999532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:02.999841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:02.999848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:03.000043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:03.000049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:03.000358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:03.000366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:03.000542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:03.000548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 
00:28:56.026 [2024-11-20 14:49:03.000879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:03.000887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:03.001179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:03.001186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:03.001477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:03.001485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:03.001666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:03.001672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:03.001833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:03.001840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 
00:28:56.026 [2024-11-20 14:49:03.002152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:03.002159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:03.002493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:03.002500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:03.002792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:03.002800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:03.003152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:03.003159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 00:28:56.026 [2024-11-20 14:49:03.003361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:03.003368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.026 qpair failed and we were unable to recover it. 
00:28:56.026 [2024-11-20 14:49:03.003694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.026 [2024-11-20 14:49:03.003701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.003864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.003871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.004085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.004094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.004280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.004288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.004604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.004611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 
00:28:56.027 [2024-11-20 14:49:03.004911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.004919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.005113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.005120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.005311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.005318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.005619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.005626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.005820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.005827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 
00:28:56.027 [2024-11-20 14:49:03.006156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.006164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.006447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.006455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.006757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.006765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.007068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.007075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.007256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.007263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 
00:28:56.027 [2024-11-20 14:49:03.007598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.007606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.007769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.007776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.007939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.007946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.008257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.008265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.008417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.008425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 
00:28:56.027 [2024-11-20 14:49:03.008782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.008789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.009091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.009098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.009343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.009352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.009663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.009670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.009953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.009961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 
00:28:56.027 [2024-11-20 14:49:03.010176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.010185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.010360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.010367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.010650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.010657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.010967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.010974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.011317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.011325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 
00:28:56.027 [2024-11-20 14:49:03.011649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.011658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.011988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.027 [2024-11-20 14:49:03.011996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.027 qpair failed and we were unable to recover it. 00:28:56.027 [2024-11-20 14:49:03.012165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.012172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.012520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.012528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.012839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.012846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 
00:28:56.028 [2024-11-20 14:49:03.013043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.013050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.013365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.013372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.013532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.013539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.013925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.013933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.014256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.014264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 
00:28:56.028 [2024-11-20 14:49:03.014423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.014430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.014809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.014816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.015099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.015109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.015153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.015160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.015581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.015589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 
00:28:56.028 [2024-11-20 14:49:03.015911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.015918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.016231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.016238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.016525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.016533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.016841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.016848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.017166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.017173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 
00:28:56.028 [2024-11-20 14:49:03.017479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.017487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.017643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.017651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.017940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.017948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.018122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.018129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.018436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.018444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 
00:28:56.028 [2024-11-20 14:49:03.018623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.018630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.018933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.018941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.019235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.019243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.019536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.019543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.019700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.019706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 
00:28:56.028 [2024-11-20 14:49:03.019743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.019750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.020041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.020049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.020363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.020371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.020658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.020666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.020984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.020991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 
00:28:56.028 [2024-11-20 14:49:03.021309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.021317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.021350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.021357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.021683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.021690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.022001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.022008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 00:28:56.028 [2024-11-20 14:49:03.022186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.028 [2024-11-20 14:49:03.022193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.028 qpair failed and we were unable to recover it. 
00:28:56.029 [2024-11-20 14:49:03.022501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.022509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 00:28:56.029 [2024-11-20 14:49:03.022742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.022749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 00:28:56.029 [2024-11-20 14:49:03.023047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.023055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 00:28:56.029 [2024-11-20 14:49:03.023356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.023363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 00:28:56.029 [2024-11-20 14:49:03.023556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.023563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 
00:28:56.029 [2024-11-20 14:49:03.023869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.023877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 00:28:56.029 [2024-11-20 14:49:03.024181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.024189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 00:28:56.029 [2024-11-20 14:49:03.024436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.024443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 00:28:56.029 [2024-11-20 14:49:03.024756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.024763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 00:28:56.029 [2024-11-20 14:49:03.025156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.025163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 
00:28:56.029 [2024-11-20 14:49:03.025546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.025554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 00:28:56.029 [2024-11-20 14:49:03.025832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.025840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 00:28:56.029 [2024-11-20 14:49:03.025877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.025886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 00:28:56.029 [2024-11-20 14:49:03.026201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.026209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 00:28:56.029 [2024-11-20 14:49:03.026507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.026515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 
00:28:56.029 [2024-11-20 14:49:03.026835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.026844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 00:28:56.029 [2024-11-20 14:49:03.027118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.027126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 00:28:56.029 [2024-11-20 14:49:03.027463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.027470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 00:28:56.029 [2024-11-20 14:49:03.027652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.027659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 00:28:56.029 [2024-11-20 14:49:03.027938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.029 [2024-11-20 14:49:03.027946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.029 qpair failed and we were unable to recover it. 
00:28:56.030 [2024-11-20 14:49:03.034808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.030 [2024-11-20 14:49:03.034814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:28:56.030 qpair failed and we were unable to recover it.
00:28:56.030 [2024-11-20 14:49:03.035108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.030 [2024-11-20 14:49:03.035116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:28:56.030 qpair failed and we were unable to recover it.
00:28:56.030 [2024-11-20 14:49:03.035386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.030 [2024-11-20 14:49:03.035393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:28:56.030 qpair failed and we were unable to recover it.
00:28:56.030 [2024-11-20 14:49:03.035577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.030 [2024-11-20 14:49:03.035585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:28:56.030 qpair failed and we were unable to recover it.
00:28:56.030 [2024-11-20 14:49:03.035962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.030 [2024-11-20 14:49:03.036029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420
00:28:56.030 qpair failed and we were unable to recover it.
00:28:56.030 [2024-11-20 14:49:03.036499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.030 [2024-11-20 14:49:03.036568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420
00:28:56.030 qpair failed and we were unable to recover it.
00:28:56.030 [2024-11-20 14:49:03.036919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.030 [2024-11-20 14:49:03.036928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:28:56.030 qpair failed and we were unable to recover it.
00:28:56.030 [2024-11-20 14:49:03.036969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.030 [2024-11-20 14:49:03.036979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:28:56.030 qpair failed and we were unable to recover it.
00:28:56.030 [2024-11-20 14:49:03.037268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.030 [2024-11-20 14:49:03.037276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:28:56.030 qpair failed and we were unable to recover it.
00:28:56.030 [2024-11-20 14:49:03.037678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.030 [2024-11-20 14:49:03.037685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:28:56.030 qpair failed and we were unable to recover it.
00:28:56.310 [2024-11-20 14:49:03.057564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.057571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.057750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.057758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.058062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.058071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.058250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.058257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.058592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.058599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 
00:28:56.310 [2024-11-20 14:49:03.058956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.058963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.059127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.059134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.059562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.059569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.059875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.059882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.060241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.060251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 
00:28:56.310 [2024-11-20 14:49:03.060443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.060451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.060607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.060614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.060795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.060801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.060965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.060972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.061275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.061281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 
00:28:56.310 [2024-11-20 14:49:03.061589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.061596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.061898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.061905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.061944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.061950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.062103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.062109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.062320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.062327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 
00:28:56.310 [2024-11-20 14:49:03.062677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.062684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.062838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.062845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.063067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.063074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.063432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.063440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.063477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.063484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 
00:28:56.310 [2024-11-20 14:49:03.063655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.063662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.063959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.063966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.064251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.064259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.064546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.064553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 00:28:56.310 [2024-11-20 14:49:03.064729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.310 [2024-11-20 14:49:03.064736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.310 qpair failed and we were unable to recover it. 
00:28:56.311 [2024-11-20 14:49:03.064967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.064974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.065256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.065264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.065610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.065617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.065849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.065856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.066138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.066146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 
00:28:56.311 [2024-11-20 14:49:03.066527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.066534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.066839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.066846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.067024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.067031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.067307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.067314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.067592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.067600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 
00:28:56.311 [2024-11-20 14:49:03.067776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.067783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.068118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.068125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.068301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.068310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.068627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.068634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.068831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.068838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 
00:28:56.311 [2024-11-20 14:49:03.069100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.069107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.069466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.069474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.069757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.069764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.070077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.070084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.070342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.070349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 
00:28:56.311 [2024-11-20 14:49:03.070668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.070675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.070871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.070878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.071068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.071075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.071365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.071372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.071693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.071700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 
00:28:56.311 [2024-11-20 14:49:03.071900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.071907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.072215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.072221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.072386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.072393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.072743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.072749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.072786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.072792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 
00:28:56.311 [2024-11-20 14:49:03.072963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.072970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.073278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.073286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.073573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.311 [2024-11-20 14:49:03.073581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.311 qpair failed and we were unable to recover it. 00:28:56.311 [2024-11-20 14:49:03.073728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.312 [2024-11-20 14:49:03.073736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.312 qpair failed and we were unable to recover it. 00:28:56.312 [2024-11-20 14:49:03.073997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.312 [2024-11-20 14:49:03.074004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.312 qpair failed and we were unable to recover it. 
00:28:56.312 [2024-11-20 14:49:03.074304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.312 [2024-11-20 14:49:03.074311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.312 qpair failed and we were unable to recover it. 00:28:56.312 [2024-11-20 14:49:03.074664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.312 [2024-11-20 14:49:03.074670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.312 qpair failed and we were unable to recover it. 00:28:56.312 [2024-11-20 14:49:03.074974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.312 [2024-11-20 14:49:03.074980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.312 qpair failed and we were unable to recover it. 00:28:56.312 [2024-11-20 14:49:03.075274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.312 [2024-11-20 14:49:03.075281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.312 qpair failed and we were unable to recover it. 00:28:56.312 [2024-11-20 14:49:03.075324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.312 [2024-11-20 14:49:03.075331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.312 qpair failed and we were unable to recover it. 
00:28:56.312 [2024-11-20 14:49:03.075684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.312 [2024-11-20 14:49:03.075692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.312 qpair failed and we were unable to recover it. 00:28:56.312 [2024-11-20 14:49:03.075845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.312 [2024-11-20 14:49:03.075852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.312 qpair failed and we were unable to recover it. 00:28:56.312 [2024-11-20 14:49:03.076042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.312 [2024-11-20 14:49:03.076050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.312 qpair failed and we were unable to recover it. 00:28:56.312 [2024-11-20 14:49:03.076092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.312 [2024-11-20 14:49:03.076099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.312 qpair failed and we were unable to recover it. 00:28:56.312 [2024-11-20 14:49:03.076421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.312 [2024-11-20 14:49:03.076429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.312 qpair failed and we were unable to recover it. 
00:28:56.312 [2024-11-20 14:49:03.076748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.312 [2024-11-20 14:49:03.076756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:28:56.312 qpair failed and we were unable to recover it.
00:28:56.315 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." record repeats for every retry from 14:49:03.076933 through 14:49:03.107174; all attempts on tqpair=0x7f3c98000b90 to addr=10.0.0.2, port=4420 fail identically ...]
00:28:56.315 [2024-11-20 14:49:03.107487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.107494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 00:28:56.315 [2024-11-20 14:49:03.107807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.107815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 00:28:56.315 [2024-11-20 14:49:03.108124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.108130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 00:28:56.315 [2024-11-20 14:49:03.108306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.108313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 00:28:56.315 [2024-11-20 14:49:03.108664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.108671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 
00:28:56.315 [2024-11-20 14:49:03.108850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.108858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 00:28:56.315 [2024-11-20 14:49:03.109129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.109136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 00:28:56.315 [2024-11-20 14:49:03.109472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.109480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 00:28:56.315 [2024-11-20 14:49:03.109761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.109768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 00:28:56.315 [2024-11-20 14:49:03.109911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.109918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 
00:28:56.315 [2024-11-20 14:49:03.110070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.110077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 00:28:56.315 [2024-11-20 14:49:03.110292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.110299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 00:28:56.315 [2024-11-20 14:49:03.110512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.110521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 00:28:56.315 [2024-11-20 14:49:03.110826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.110832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 00:28:56.315 [2024-11-20 14:49:03.111133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.111139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 
00:28:56.315 [2024-11-20 14:49:03.111342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.111349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 00:28:56.315 [2024-11-20 14:49:03.111713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.111720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 00:28:56.315 [2024-11-20 14:49:03.112032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.315 [2024-11-20 14:49:03.112038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.315 qpair failed and we were unable to recover it. 00:28:56.315 [2024-11-20 14:49:03.112353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.112360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.112528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.112535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 
00:28:56.316 [2024-11-20 14:49:03.112822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.112829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.113108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.113115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.113449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.113457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.113620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.113627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.113832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.113839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 
00:28:56.316 [2024-11-20 14:49:03.113985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.113992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.114268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.114275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.114456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.114463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.114801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.114808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.114979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.114986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 
00:28:56.316 [2024-11-20 14:49:03.115283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.115291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.115602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.115609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.115646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.115652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.116013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.116020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.116271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.116278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 
00:28:56.316 [2024-11-20 14:49:03.116462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.116469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.116766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.116773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.117062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.117070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.117228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.117235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.117581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.117588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 
00:28:56.316 [2024-11-20 14:49:03.117870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.117877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.118062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.118069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.118282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.118289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.118568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.118575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.118747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.118754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 
00:28:56.316 [2024-11-20 14:49:03.119103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.119110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.119480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.119487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.119827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.119834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.119866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.119873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.120176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.120183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 
00:28:56.316 [2024-11-20 14:49:03.120503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.120510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.120862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.120868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.121043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.121055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.121238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.121248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.121451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.121457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 
00:28:56.316 [2024-11-20 14:49:03.121710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.316 [2024-11-20 14:49:03.121717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.316 qpair failed and we were unable to recover it. 00:28:56.316 [2024-11-20 14:49:03.122040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.122047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 00:28:56.317 [2024-11-20 14:49:03.122209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.122216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 00:28:56.317 [2024-11-20 14:49:03.122369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.122376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 00:28:56.317 [2024-11-20 14:49:03.122550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.122557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 
00:28:56.317 [2024-11-20 14:49:03.122839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.122846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 00:28:56.317 [2024-11-20 14:49:03.123135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.123142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 00:28:56.317 [2024-11-20 14:49:03.123424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.123431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 00:28:56.317 [2024-11-20 14:49:03.123719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.123726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 00:28:56.317 [2024-11-20 14:49:03.124021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.124027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 
00:28:56.317 [2024-11-20 14:49:03.124328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.124335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 00:28:56.317 [2024-11-20 14:49:03.124650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.124656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 00:28:56.317 [2024-11-20 14:49:03.124881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.124888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 00:28:56.317 [2024-11-20 14:49:03.125198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.125205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 00:28:56.317 [2024-11-20 14:49:03.125562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.125569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 
00:28:56.317 [2024-11-20 14:49:03.125898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.125905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 00:28:56.317 [2024-11-20 14:49:03.126189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.126196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 00:28:56.317 [2024-11-20 14:49:03.126369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.126376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 00:28:56.317 [2024-11-20 14:49:03.126671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.126678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 00:28:56.317 [2024-11-20 14:49:03.127019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.317 [2024-11-20 14:49:03.127025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.317 qpair failed and we were unable to recover it. 
00:28:56.317 [2024-11-20 14:49:03.127296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.317 [2024-11-20 14:49:03.127303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:28:56.317 qpair failed and we were unable to recover it.
00:28:56.320 [2024-11-20 14:49:03.156811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.320 [2024-11-20 14:49:03.156819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.320 qpair failed and we were unable to recover it. 00:28:56.320 [2024-11-20 14:49:03.156949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.320 [2024-11-20 14:49:03.156956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.320 qpair failed and we were unable to recover it. 00:28:56.320 [2024-11-20 14:49:03.157253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.320 [2024-11-20 14:49:03.157260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.320 qpair failed and we were unable to recover it. 00:28:56.320 [2024-11-20 14:49:03.157597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.320 [2024-11-20 14:49:03.157604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.320 qpair failed and we were unable to recover it. 00:28:56.320 [2024-11-20 14:49:03.157883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.320 [2024-11-20 14:49:03.157890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.320 qpair failed and we were unable to recover it. 
00:28:56.320 [2024-11-20 14:49:03.158191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.320 [2024-11-20 14:49:03.158199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.320 qpair failed and we were unable to recover it. 00:28:56.320 [2024-11-20 14:49:03.158268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.320 [2024-11-20 14:49:03.158275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.320 qpair failed and we were unable to recover it. 00:28:56.320 [2024-11-20 14:49:03.158551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.320 [2024-11-20 14:49:03.158558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.320 qpair failed and we were unable to recover it. 00:28:56.320 [2024-11-20 14:49:03.158835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.320 [2024-11-20 14:49:03.158841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.320 qpair failed and we were unable to recover it. 00:28:56.320 [2024-11-20 14:49:03.159179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.320 [2024-11-20 14:49:03.159187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.320 qpair failed and we were unable to recover it. 
00:28:56.320 [2024-11-20 14:49:03.159362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.320 [2024-11-20 14:49:03.159370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.320 qpair failed and we were unable to recover it. 00:28:56.320 [2024-11-20 14:49:03.159687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.320 [2024-11-20 14:49:03.159694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.320 qpair failed and we were unable to recover it. 00:28:56.320 [2024-11-20 14:49:03.159878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.159885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.159926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.159933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.160131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.160137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 
00:28:56.321 [2024-11-20 14:49:03.160469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.160476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.160790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.160797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.160981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.160988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.161290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.161297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.161467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.161475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 
00:28:56.321 [2024-11-20 14:49:03.161763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.161770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.162060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.162067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.162203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.162210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.162385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.162392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.162696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.162703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 
00:28:56.321 [2024-11-20 14:49:03.162895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.162902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.163225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.163232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.163554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.163562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.163884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.163891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.164031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.164038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 
00:28:56.321 [2024-11-20 14:49:03.164238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.164248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.164455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.164461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.164771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.164779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.165073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.165079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.165384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.165391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 
00:28:56.321 [2024-11-20 14:49:03.165696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.165704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.165990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.165997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.166372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.166379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.166693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.166699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.166857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.166864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 
00:28:56.321 [2024-11-20 14:49:03.167231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.167237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.167439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.167446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.167761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.167769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.167922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.167929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.168138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.168145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 
00:28:56.321 [2024-11-20 14:49:03.168331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.168337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.168658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.168665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.168854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.168861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.169219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.169226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 00:28:56.321 [2024-11-20 14:49:03.169624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.321 [2024-11-20 14:49:03.169633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.321 qpair failed and we were unable to recover it. 
00:28:56.321 [2024-11-20 14:49:03.169813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.169820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.170031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.170037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.170362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.170369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.170700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.170707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.171024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.171031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 
00:28:56.322 [2024-11-20 14:49:03.171203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.171210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.171403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.171410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.171720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.171727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.172003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.172011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.172171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.172177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 
00:28:56.322 [2024-11-20 14:49:03.172443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.172450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.172635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.172642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.172679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.172685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.173012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.173019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.173304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.173311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 
00:28:56.322 [2024-11-20 14:49:03.173736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.173743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.173880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.173887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.174117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.174124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.174407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.174414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.174580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.174587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 
00:28:56.322 [2024-11-20 14:49:03.174826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.174833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.175136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.175143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.175533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.175540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.175811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.175818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.176108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.176115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 
00:28:56.322 [2024-11-20 14:49:03.176497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.176504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.176816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.176823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.177101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.177107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.177481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.177488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 00:28:56.322 [2024-11-20 14:49:03.177804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.322 [2024-11-20 14:49:03.177812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.322 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error sequence for tqpair=0x7f3c98000b90 (addr=10.0.0.2, port=4420) repeats with only the timestamps changing, through 2024-11-20 14:49:03.205971; duplicate entries omitted ...]
00:28:56.326 [2024-11-20 14:49:03.206146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.206153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.206438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.206446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.206608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.206615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.206887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.206894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.207085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.207091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 
00:28:56.326 [2024-11-20 14:49:03.207409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.207416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.207587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.207594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.207880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.207887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.208060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.208066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.208372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.208380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 
00:28:56.326 [2024-11-20 14:49:03.208545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.208552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.208909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.208916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.209291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.209299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.209614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.209621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.209934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.209940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 
00:28:56.326 [2024-11-20 14:49:03.210240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.210251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.210418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.210425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.210721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.210728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.210917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.210924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.211238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.211250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 
00:28:56.326 [2024-11-20 14:49:03.211412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.211418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.211796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.211803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.212119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.212126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.212304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.212315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.212592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.212598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 
00:28:56.326 [2024-11-20 14:49:03.212889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.212895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.213066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.213072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.213475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.213483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.213638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.213645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 00:28:56.326 [2024-11-20 14:49:03.213854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.326 [2024-11-20 14:49:03.213861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.326 qpair failed and we were unable to recover it. 
00:28:56.327 [2024-11-20 14:49:03.214165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.214173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.214553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.214561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.214736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.214744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.215095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.215101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.215241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.215252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 
00:28:56.327 [2024-11-20 14:49:03.215535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.215541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.215840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.215846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.216014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.216020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.216239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.216251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.216535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.216542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 
00:28:56.327 [2024-11-20 14:49:03.216698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.216704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.216922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.216929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.217101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.217108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.217423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.217431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.217601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.217607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 
00:28:56.327 [2024-11-20 14:49:03.217916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.217923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.218079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.218087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.218246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.218253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.218552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.218559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.218861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.218869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 
00:28:56.327 [2024-11-20 14:49:03.219197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.219204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.219448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.219455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.219784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.219792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.220080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.220087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.220175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.220182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 
00:28:56.327 [2024-11-20 14:49:03.220233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.220240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.220427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.220434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.220614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.220621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.220817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.220823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.221089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.221098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 
00:28:56.327 [2024-11-20 14:49:03.221281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.221289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.221625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.221632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.222019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.222026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.327 qpair failed and we were unable to recover it. 00:28:56.327 [2024-11-20 14:49:03.222307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.327 [2024-11-20 14:49:03.222314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.328 qpair failed and we were unable to recover it. 00:28:56.328 [2024-11-20 14:49:03.222622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.328 [2024-11-20 14:49:03.222629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.328 qpair failed and we were unable to recover it. 
00:28:56.328 [2024-11-20 14:49:03.222859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.328 [2024-11-20 14:49:03.222866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.328 qpair failed and we were unable to recover it. 00:28:56.328 [2024-11-20 14:49:03.223036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.328 [2024-11-20 14:49:03.223043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.328 qpair failed and we were unable to recover it. 00:28:56.328 [2024-11-20 14:49:03.223163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.328 [2024-11-20 14:49:03.223169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.328 qpair failed and we were unable to recover it. 00:28:56.328 [2024-11-20 14:49:03.223527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.328 [2024-11-20 14:49:03.223534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.328 qpair failed and we were unable to recover it. 00:28:56.328 [2024-11-20 14:49:03.223854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.328 [2024-11-20 14:49:03.223861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.328 qpair failed and we were unable to recover it. 
00:28:56.328 [2024-11-20 14:49:03.224192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.328 [2024-11-20 14:49:03.224199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.328 qpair failed and we were unable to recover it. 00:28:56.328 [2024-11-20 14:49:03.224493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.328 [2024-11-20 14:49:03.224500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.328 qpair failed and we were unable to recover it. 00:28:56.328 [2024-11-20 14:49:03.224797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.328 [2024-11-20 14:49:03.224804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.328 qpair failed and we were unable to recover it. 00:28:56.328 [2024-11-20 14:49:03.225110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.328 [2024-11-20 14:49:03.225117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.328 qpair failed and we were unable to recover it. 00:28:56.328 [2024-11-20 14:49:03.225466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.328 [2024-11-20 14:49:03.225473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.328 qpair failed and we were unable to recover it. 
00:28:56.328 [2024-11-20 14:49:03.225790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.328 [2024-11-20 14:49:03.225797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:28:56.328 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111 / sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats for timestamps 14:49:03.226121 through 14:49:03.256365 ...]
00:28:56.331 [2024-11-20 14:49:03.256856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.331 [2024-11-20 14:49:03.256937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:56.331 qpair failed and we were unable to recover it.
00:28:56.331 [2024-11-20 14:49:03.257446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.331 [2024-11-20 14:49:03.257529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:56.331 qpair failed and we were unable to recover it.
00:28:56.331 [2024-11-20 14:49:03.257796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.331 [2024-11-20 14:49:03.257829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420
00:28:56.331 qpair failed and we were unable to recover it.
00:28:56.331 [2024-11-20 14:49:03.258128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.258136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 00:28:56.331 [2024-11-20 14:49:03.258412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.258420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 00:28:56.331 [2024-11-20 14:49:03.258767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.258773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 00:28:56.331 [2024-11-20 14:49:03.258945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.258953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 00:28:56.331 [2024-11-20 14:49:03.259238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.259247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 
00:28:56.331 [2024-11-20 14:49:03.259556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.259563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 00:28:56.331 [2024-11-20 14:49:03.259904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.259910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 00:28:56.331 [2024-11-20 14:49:03.260074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.260081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 00:28:56.331 [2024-11-20 14:49:03.260203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.260210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 00:28:56.331 [2024-11-20 14:49:03.260500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.260507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 
00:28:56.331 [2024-11-20 14:49:03.260808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.260814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 00:28:56.331 [2024-11-20 14:49:03.261100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.261108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 00:28:56.331 [2024-11-20 14:49:03.261490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.261497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 00:28:56.331 [2024-11-20 14:49:03.261677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.261684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 00:28:56.331 [2024-11-20 14:49:03.261865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.261872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 
00:28:56.331 [2024-11-20 14:49:03.262202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.262209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 00:28:56.331 [2024-11-20 14:49:03.262597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.262667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 00:28:56.331 [2024-11-20 14:49:03.263021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.263036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 00:28:56.331 [2024-11-20 14:49:03.263220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.263234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 00:28:56.331 [2024-11-20 14:49:03.263425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.263440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 
00:28:56.331 [2024-11-20 14:49:03.263545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.331 [2024-11-20 14:49:03.263558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.331 qpair failed and we were unable to recover it. 00:28:56.331 [2024-11-20 14:49:03.263721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.263735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.264042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.264055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.264362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.264376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.264563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.264576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 
00:28:56.332 [2024-11-20 14:49:03.264928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.264942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.265282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.265296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.265492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.265506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.265714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.265727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.266050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.266064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 
00:28:56.332 [2024-11-20 14:49:03.266378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.266392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.266766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.266779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.266836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.266849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.267017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.267029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.267191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.267203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 
00:28:56.332 [2024-11-20 14:49:03.267508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.267522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.267728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.267741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.268079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.268092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.268428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.268443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.268772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.268785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 
00:28:56.332 [2024-11-20 14:49:03.269111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.269125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.269296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.269310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.269632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.269639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.270001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.270008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.270046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.270053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 
00:28:56.332 [2024-11-20 14:49:03.270233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.270240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.270566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.270573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.270874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.270882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.271148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.271156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.271461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.271469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 
00:28:56.332 [2024-11-20 14:49:03.271767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.271774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.271973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.271981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.272141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.272148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.272534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.272541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.272857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.272865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 
00:28:56.332 [2024-11-20 14:49:03.273195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.273202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.273592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.273604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.273910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.273917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.274085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.332 [2024-11-20 14:49:03.274094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.332 qpair failed and we were unable to recover it. 00:28:56.332 [2024-11-20 14:49:03.274389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.274397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 
00:28:56.333 [2024-11-20 14:49:03.274523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.274532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 00:28:56.333 [2024-11-20 14:49:03.274850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.274857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 00:28:56.333 [2024-11-20 14:49:03.275183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.275190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 00:28:56.333 [2024-11-20 14:49:03.275367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.275375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 00:28:56.333 [2024-11-20 14:49:03.275657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.275664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 
00:28:56.333 [2024-11-20 14:49:03.275880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.275887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 00:28:56.333 [2024-11-20 14:49:03.276181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.276188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 00:28:56.333 [2024-11-20 14:49:03.276358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.276365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 00:28:56.333 [2024-11-20 14:49:03.276660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.276667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 00:28:56.333 [2024-11-20 14:49:03.277004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.277011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 
00:28:56.333 [2024-11-20 14:49:03.277310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.277317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 00:28:56.333 [2024-11-20 14:49:03.277621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.277628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 00:28:56.333 [2024-11-20 14:49:03.277811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.277818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 00:28:56.333 [2024-11-20 14:49:03.278202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.278209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 00:28:56.333 [2024-11-20 14:49:03.278383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.278390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 
00:28:56.333 [2024-11-20 14:49:03.278563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.278570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 00:28:56.333 [2024-11-20 14:49:03.278859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.278866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 00:28:56.333 [2024-11-20 14:49:03.279039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.279046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 00:28:56.333 [2024-11-20 14:49:03.279212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.279220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 00:28:56.333 [2024-11-20 14:49:03.279403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.333 [2024-11-20 14:49:03.279410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.333 qpair failed and we were unable to recover it. 
00:28:56.336 [2024-11-20 14:49:03.308966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.336 [2024-11-20 14:49:03.308972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.336 qpair failed and we were unable to recover it. 00:28:56.336 [2024-11-20 14:49:03.309230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.336 [2024-11-20 14:49:03.309236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.336 qpair failed and we were unable to recover it. 00:28:56.336 [2024-11-20 14:49:03.309563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.336 [2024-11-20 14:49:03.309570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.336 qpair failed and we were unable to recover it. 00:28:56.336 [2024-11-20 14:49:03.309736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.336 [2024-11-20 14:49:03.309744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.336 qpair failed and we were unable to recover it. 00:28:56.336 [2024-11-20 14:49:03.309924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.336 [2024-11-20 14:49:03.309931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.336 qpair failed and we were unable to recover it. 
00:28:56.336 [2024-11-20 14:49:03.310115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.336 [2024-11-20 14:49:03.310121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.336 qpair failed and we were unable to recover it. 00:28:56.336 [2024-11-20 14:49:03.310415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.336 [2024-11-20 14:49:03.310422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.336 qpair failed and we were unable to recover it. 00:28:56.336 [2024-11-20 14:49:03.310459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.336 [2024-11-20 14:49:03.310465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.336 qpair failed and we were unable to recover it. 00:28:56.336 [2024-11-20 14:49:03.310502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.336 [2024-11-20 14:49:03.310508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.336 qpair failed and we were unable to recover it. 00:28:56.336 [2024-11-20 14:49:03.310804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.336 [2024-11-20 14:49:03.310811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.336 qpair failed and we were unable to recover it. 
00:28:56.336 [2024-11-20 14:49:03.311103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.336 [2024-11-20 14:49:03.311110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.336 qpair failed and we were unable to recover it. 00:28:56.336 [2024-11-20 14:49:03.311290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.336 [2024-11-20 14:49:03.311297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.336 qpair failed and we were unable to recover it. 00:28:56.336 [2024-11-20 14:49:03.311637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.336 [2024-11-20 14:49:03.311644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.336 qpair failed and we were unable to recover it. 00:28:56.336 [2024-11-20 14:49:03.311811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.336 [2024-11-20 14:49:03.311817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.336 qpair failed and we were unable to recover it. 00:28:56.336 [2024-11-20 14:49:03.311974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.336 [2024-11-20 14:49:03.311982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.336 qpair failed and we were unable to recover it. 
00:28:56.336 [2024-11-20 14:49:03.312028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.336 [2024-11-20 14:49:03.312035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.312345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.312353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.312504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.312512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.312690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.312697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.313029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.313036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 
00:28:56.337 [2024-11-20 14:49:03.313210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.313218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.313530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.313537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.313894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.313900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.314176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.314183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.314487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.314494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 
00:28:56.337 [2024-11-20 14:49:03.314653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.314660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.314968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.314976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.315138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.315145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.315179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.315186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.315337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.315344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 
00:28:56.337 [2024-11-20 14:49:03.315521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.315528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.315815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.315822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.316148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.316155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.316358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.316366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.316641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.316648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 
00:28:56.337 [2024-11-20 14:49:03.316965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.316972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.317255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.317263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.317562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.317569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.317890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.317897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.318190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.318197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 
00:28:56.337 [2024-11-20 14:49:03.318496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.318503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.318658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.318664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.319035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.319042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.319331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.319338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.319658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.319665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 
00:28:56.337 [2024-11-20 14:49:03.319966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.319973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.320162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.320169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.320516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.320523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.320839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.320846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.321125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.321132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 
00:28:56.337 [2024-11-20 14:49:03.321205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.321211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.321360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.321367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.321693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.321700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.337 qpair failed and we were unable to recover it. 00:28:56.337 [2024-11-20 14:49:03.321905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.337 [2024-11-20 14:49:03.321914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.322103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.322110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 
00:28:56.338 [2024-11-20 14:49:03.322356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.322363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.322644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.322651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.322839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.322846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.323001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.323008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.323189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.323196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 
00:28:56.338 [2024-11-20 14:49:03.323505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.323512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.323667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.323673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.323912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.323919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.324217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.324224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.324520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.324527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 
00:28:56.338 [2024-11-20 14:49:03.324829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.324836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.325134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.325144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.325433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.325440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.325741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.325748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.325781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.325788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 
00:28:56.338 [2024-11-20 14:49:03.326301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.326384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.326655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.326689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.326874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.326884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.327260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.327268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.327578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.327584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 
00:28:56.338 [2024-11-20 14:49:03.327883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.327890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.327951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.327958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.328022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.328029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c94000b90 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.328257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.328291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 00:28:56.338 [2024-11-20 14:49:03.328723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.338 [2024-11-20 14:49:03.328762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.338 qpair failed and we were unable to recover it. 
00:28:56.608 [2024-11-20 14:49:03.358395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.608 [2024-11-20 14:49:03.358405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.608 qpair failed and we were unable to recover it. 00:28:56.608 [2024-11-20 14:49:03.358710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.358720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.359017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.359026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.359259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.359270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.359448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.359459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 
00:28:56.609 [2024-11-20 14:49:03.359769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.359778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.360068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.360078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.360482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.360492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.360806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.360815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.360983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.360993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 
00:28:56.609 [2024-11-20 14:49:03.361215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.361225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.361461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.361471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.361789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.361799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.361986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.361996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.362191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.362201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 
00:28:56.609 [2024-11-20 14:49:03.362407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.362417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.362462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.362472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.362758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.362768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.362949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.362959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.363259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.363269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 
00:28:56.609 [2024-11-20 14:49:03.363469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.363478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.363871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.363880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.364187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.364196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.364504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.364514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.364852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.364861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 
00:28:56.609 [2024-11-20 14:49:03.365179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.365191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.365521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.365531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.365699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.365709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.365892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.365902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.366188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.366198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 
00:28:56.609 [2024-11-20 14:49:03.366366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.366376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.366525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.366535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.366842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.366852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.367153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.367162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.367365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.367375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 
00:28:56.609 [2024-11-20 14:49:03.367575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.367584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.367870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.367879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.368051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.368060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.368423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.609 [2024-11-20 14:49:03.368436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.609 qpair failed and we were unable to recover it. 00:28:56.609 [2024-11-20 14:49:03.368749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.368759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 
00:28:56.610 [2024-11-20 14:49:03.368945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.368955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.369274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.369285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.369608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.369617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.369778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.369787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.370133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.370142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 
00:28:56.610 [2024-11-20 14:49:03.370325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.370335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.370615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.370624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.370953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.370963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.371126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.371135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.371492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.371502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 
00:28:56.610 [2024-11-20 14:49:03.371803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.371813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.372194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.372204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.372547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.372559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.372934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.372944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.373116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.373126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 
00:28:56.610 [2024-11-20 14:49:03.373306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.373316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.373492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.373502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.373722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.373732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.374055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.374065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.374391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.374401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 
00:28:56.610 [2024-11-20 14:49:03.374747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.374757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.374923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.374932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.375111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.375121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.375302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.375313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.375645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.375655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 
00:28:56.610 [2024-11-20 14:49:03.376017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.376026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.376096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.376105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.376191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.376201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1509490 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.376677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.376721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.376916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.376931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 
00:28:56.610 [2024-11-20 14:49:03.377273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.377288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.377638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.377651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.377949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.377962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.378126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.378138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.378429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.378441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 
00:28:56.610 [2024-11-20 14:49:03.378758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.378771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.379053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.610 [2024-11-20 14:49:03.379065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.610 qpair failed and we were unable to recover it. 00:28:56.610 [2024-11-20 14:49:03.379255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.611 [2024-11-20 14:49:03.379268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.611 qpair failed and we were unable to recover it. 00:28:56.611 [2024-11-20 14:49:03.379455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.611 [2024-11-20 14:49:03.379467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.611 qpair failed and we were unable to recover it. 00:28:56.611 [2024-11-20 14:49:03.379791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.611 [2024-11-20 14:49:03.379813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.611 qpair failed and we were unable to recover it. 
00:28:56.611-00:28:56.614 [repeated: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — identical error sequence elided]
00:28:56.614 [2024-11-20 14:49:03.411068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.411078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.411364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.411374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.411681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.411691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.412001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.412011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.412302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.412312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 
00:28:56.614 [2024-11-20 14:49:03.412689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.412699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.412739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.412748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.412912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.412921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.413094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.413104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.413409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.413423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 
00:28:56.614 [2024-11-20 14:49:03.413799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.413809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.414101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.414111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.414425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.414435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.414720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.414730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.414936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.414946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 
00:28:56.614 [2024-11-20 14:49:03.415262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.415272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.415534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.415543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.415870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.415879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.416207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.416216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.416394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.416405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 
00:28:56.614 [2024-11-20 14:49:03.416589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.416599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.416899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.416909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.417211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.417221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.417436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.417446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.417622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.417632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 
00:28:56.614 [2024-11-20 14:49:03.417815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.417825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.418157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.418167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.418460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.418470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.418632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.418641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 00:28:56.614 [2024-11-20 14:49:03.418845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.614 [2024-11-20 14:49:03.418854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.614 qpair failed and we were unable to recover it. 
00:28:56.614 [2024-11-20 14:49:03.419205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.419215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.419588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.419599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.419910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.419921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.420231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.420242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.420562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.420573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 
00:28:56.615 [2024-11-20 14:49:03.420735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.420744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.421078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.421089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.421397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.421407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.421753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.421763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.421977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.421986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 
00:28:56.615 [2024-11-20 14:49:03.422340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.422350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.422560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.422569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.422879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.422889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.423237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.423249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.423557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.423567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 
00:28:56.615 [2024-11-20 14:49:03.423741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.423751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.423968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.423977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.424281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.424291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.424436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.424447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.424654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.424667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 
00:28:56.615 [2024-11-20 14:49:03.424962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.424972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.425254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.425265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.425462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.425472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.425661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.425672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.425880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.425890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 
00:28:56.615 [2024-11-20 14:49:03.426079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.426088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.426269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.426280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.426632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.426641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.426930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.426940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.427135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.427144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 
00:28:56.615 [2024-11-20 14:49:03.427492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.427502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.427791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.427801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.428090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.428099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.428286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.428296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.428596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.428606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 
00:28:56.615 [2024-11-20 14:49:03.428885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.428895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.429208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.429217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.429575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.615 [2024-11-20 14:49:03.429585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.615 qpair failed and we were unable to recover it. 00:28:56.615 [2024-11-20 14:49:03.429747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.429756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.429801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.429810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 
00:28:56.616 [2024-11-20 14:49:03.430091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.430100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.430412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.430423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.430752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.430762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.430951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.430961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.431189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.431199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 
00:28:56.616 [2024-11-20 14:49:03.431377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.431387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.431565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.431574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.431870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.431880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.432070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.432081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.432368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.432378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 
00:28:56.616 [2024-11-20 14:49:03.432579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.432589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.432877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.432887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.433202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.433212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.433401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.433411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.433714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.433724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 
00:28:56.616 [2024-11-20 14:49:03.433889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.433899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.434201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.434210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.434510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.434521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.434679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.434689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.434874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.434885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 
00:28:56.616 [2024-11-20 14:49:03.435193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.435203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.435514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.435524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.435845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.435855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.436133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.436143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.436448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.436458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 
00:28:56.616 [2024-11-20 14:49:03.436619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.436629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.436970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.436980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.437143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.437153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.437466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.437475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.437794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.437804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 
00:28:56.616 [2024-11-20 14:49:03.437989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.437999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.438282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.438292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.438472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.438482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.438716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.438725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 00:28:56.616 [2024-11-20 14:49:03.439089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.439099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.616 qpair failed and we were unable to recover it. 
00:28:56.616 [2024-11-20 14:49:03.439263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.616 [2024-11-20 14:49:03.439273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.439475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.439485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.439622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.439631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.439868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.439878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.440054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.440064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 
00:28:56.617 [2024-11-20 14:49:03.440384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.440395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.440563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.440573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.440862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.440872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.441143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.441153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.441477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.441487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 
00:28:56.617 [2024-11-20 14:49:03.441646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.441655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.441817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.441829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.442025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.442034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.442204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.442214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.442508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.442518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 
00:28:56.617 [2024-11-20 14:49:03.442798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.442808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.443110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.443119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.443417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.443428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.443744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.443754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.444056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.444066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 
00:28:56.617 [2024-11-20 14:49:03.444386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.444397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.444673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.444682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.444841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.444851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.445021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.445030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.445189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.445198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 
00:28:56.617 [2024-11-20 14:49:03.445533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.445543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.445733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.445744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.446048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.446058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.446253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.446264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.446429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.446439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 
00:28:56.617 [2024-11-20 14:49:03.446743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.446752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.447029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.447039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.617 [2024-11-20 14:49:03.447343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.617 [2024-11-20 14:49:03.447354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.617 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.447720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.447730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.448017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.448026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 
00:28:56.618 [2024-11-20 14:49:03.448396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.448406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.448594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.448604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.448938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.448948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.449109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.449118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.449286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.449296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 
00:28:56.618 [2024-11-20 14:49:03.449597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.449606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.449878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.449887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.450051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.450060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.450339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.450348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.450548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.450558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 
00:28:56.618 [2024-11-20 14:49:03.450942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.450951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.451250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.451260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.451554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.451563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.451835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.451844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.452139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.452149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 
00:28:56.618 [2024-11-20 14:49:03.452443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.452453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.452758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.452770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.453065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.453074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.453219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.453229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.453515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.453525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 
00:28:56.618 [2024-11-20 14:49:03.453795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.453805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.454116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.454127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.454400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.454411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.454714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.454724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.454901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.454911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 
00:28:56.618 [2024-11-20 14:49:03.455201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.455211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.455370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.455381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.455650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.455659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.455840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.455849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.456007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.456016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 
00:28:56.618 [2024-11-20 14:49:03.456335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.456346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.456684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.456694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.456971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.456981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.457303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.457313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 00:28:56.618 [2024-11-20 14:49:03.457619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.618 [2024-11-20 14:49:03.457628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420 00:28:56.618 qpair failed and we were unable to recover it. 
00:28:56.619 [2024-11-20 14:49:03.465914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.619 [2024-11-20 14:49:03.465924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420
00:28:56.619 qpair failed and we were unable to recover it.
00:28:56.619 [2024-11-20 14:49:03.466268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.619 [2024-11-20 14:49:03.466278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420
00:28:56.619 qpair failed and we were unable to recover it.
00:28:56.619 [2024-11-20 14:49:03.466571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.619 [2024-11-20 14:49:03.466581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420
00:28:56.619 qpair failed and we were unable to recover it.
00:28:56.619 [2024-11-20 14:49:03.466875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.619 [2024-11-20 14:49:03.466885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420
00:28:56.619 qpair failed and we were unable to recover it.
00:28:56.619 [2024-11-20 14:49:03.467046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.619 [2024-11-20 14:49:03.467057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ca0000b90 with addr=10.0.0.2, port=4420
00:28:56.619 qpair failed and we were unable to recover it.
00:28:56.619 A controller has encountered a failure and is being reset.
00:28:56.619 [2024-11-20 14:49:03.467327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.619 [2024-11-20 14:49:03.467376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1506020 with addr=10.0.0.2, port=4420
00:28:56.619 [2024-11-20 14:49:03.467389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506020 is same with the state(6) to be set
00:28:56.619 [2024-11-20 14:49:03.467406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1506020 (9): Bad file descriptor
00:28:56.619 [2024-11-20 14:49:03.467418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:28:56.619 [2024-11-20 14:49:03.467426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:28:56.619 [2024-11-20 14:49:03.467435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:56.619 Unable to reset the controller.
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:56.620 Malloc0
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:56.620 [2024-11-20 14:49:03.615007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:56.620 [2024-11-20 14:49:03.643250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:56.620 14:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 4100086
00:28:57.559 Controller properly reset.
00:29:02.836 Initializing NVMe Controllers
00:29:02.836 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:02.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:02.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:29:02.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:29:02.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:29:02.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:29:02.836 Initialization complete. Launching workers.
00:29:02.836 Starting thread on core 1 00:29:02.836 Starting thread on core 2 00:29:02.836 Starting thread on core 3 00:29:02.836 Starting thread on core 0 00:29:02.836 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:02.836 00:29:02.836 real 0m11.290s 00:29:02.836 user 0m35.396s 00:29:02.836 sys 0m5.089s 00:29:02.836 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:02.836 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.836 ************************************ 00:29:02.836 END TEST nvmf_target_disconnect_tc2 00:29:02.836 ************************************ 00:29:02.836 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:02.836 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:02.836 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:02.836 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:02.836 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:02.837 rmmod nvme_tcp 00:29:02.837 rmmod nvme_fabrics 00:29:02.837 rmmod nvme_keyring 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 4100844 ']' 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 4100844 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 4100844 ']' 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 4100844 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4100844 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4100844' 00:29:02.837 killing process with pid 4100844 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 4100844 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 4100844 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.837 14:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.745 14:49:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:04.745 00:29:04.745 real 0m19.264s 00:29:04.745 user 1m2.263s 00:29:04.745 sys 0m9.681s 00:29:04.745 14:49:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:04.745 14:49:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:04.745 ************************************ 00:29:04.745 END TEST nvmf_target_disconnect 00:29:04.745 ************************************ 00:29:04.745 14:49:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:04.745 00:29:04.745 real 5m27.517s 00:29:04.745 user 10m23.889s 00:29:04.745 sys 1m41.868s 00:29:04.745 14:49:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:04.745 14:49:11 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.745 ************************************ 00:29:04.745 END TEST nvmf_host 00:29:04.745 ************************************ 00:29:04.745 14:49:11 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:04.745 14:49:11 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:04.745 14:49:11 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:04.745 14:49:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:04.745 14:49:11 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.745 14:49:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:04.745 ************************************ 00:29:04.745 START TEST nvmf_target_core_interrupt_mode 00:29:04.745 ************************************ 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:04.746 * Looking for test storage... 
00:29:04.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:04.746 14:49:11 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:04.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.746 --rc 
genhtml_branch_coverage=1 00:29:04.746 --rc genhtml_function_coverage=1 00:29:04.746 --rc genhtml_legend=1 00:29:04.746 --rc geninfo_all_blocks=1 00:29:04.746 --rc geninfo_unexecuted_blocks=1 00:29:04.746 00:29:04.746 ' 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:04.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.746 --rc genhtml_branch_coverage=1 00:29:04.746 --rc genhtml_function_coverage=1 00:29:04.746 --rc genhtml_legend=1 00:29:04.746 --rc geninfo_all_blocks=1 00:29:04.746 --rc geninfo_unexecuted_blocks=1 00:29:04.746 00:29:04.746 ' 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:04.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.746 --rc genhtml_branch_coverage=1 00:29:04.746 --rc genhtml_function_coverage=1 00:29:04.746 --rc genhtml_legend=1 00:29:04.746 --rc geninfo_all_blocks=1 00:29:04.746 --rc geninfo_unexecuted_blocks=1 00:29:04.746 00:29:04.746 ' 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:04.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.746 --rc genhtml_branch_coverage=1 00:29:04.746 --rc genhtml_function_coverage=1 00:29:04.746 --rc genhtml_legend=1 00:29:04.746 --rc geninfo_all_blocks=1 00:29:04.746 --rc geninfo_unexecuted_blocks=1 00:29:04.746 00:29:04.746 ' 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.746 
14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.746 14:49:11 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:04.746 
14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:04.746 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:04.747 ************************************ 00:29:04.747 START TEST nvmf_abort 00:29:04.747 ************************************ 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:04.747 * Looking for test storage... 
00:29:04.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:04.747 14:49:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:04.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.747 --rc genhtml_branch_coverage=1 00:29:04.747 --rc genhtml_function_coverage=1 00:29:04.747 --rc genhtml_legend=1 00:29:04.747 --rc geninfo_all_blocks=1 00:29:04.747 --rc geninfo_unexecuted_blocks=1 00:29:04.747 00:29:04.747 ' 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:04.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.747 --rc genhtml_branch_coverage=1 00:29:04.747 --rc genhtml_function_coverage=1 00:29:04.747 --rc genhtml_legend=1 00:29:04.747 --rc geninfo_all_blocks=1 00:29:04.747 --rc geninfo_unexecuted_blocks=1 00:29:04.747 00:29:04.747 ' 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:04.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.747 --rc genhtml_branch_coverage=1 00:29:04.747 --rc genhtml_function_coverage=1 00:29:04.747 --rc genhtml_legend=1 00:29:04.747 --rc geninfo_all_blocks=1 00:29:04.747 --rc geninfo_unexecuted_blocks=1 00:29:04.747 00:29:04.747 ' 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:04.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.747 --rc genhtml_branch_coverage=1 00:29:04.747 --rc genhtml_function_coverage=1 00:29:04.747 --rc genhtml_legend=1 00:29:04.747 --rc geninfo_all_blocks=1 00:29:04.747 --rc geninfo_unexecuted_blocks=1 00:29:04.747 00:29:04.747 ' 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.747 14:49:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.747 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:04.748 14:49:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:04.748 14:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
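The lcov version gate traced earlier in this section (`cmp_versions 1.15 '<' 2` via the `lt`/`decimal` helpers in scripts/common.sh) splits each version string on `.`/`-`/`:` and compares field by field. A minimal standalone sketch of that logic, reconstructed from the xtrace (so treat the exact semantics as an approximation of the real helper, which supports more operators and suffixes), is:

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version "less than" test seen in the
# scripts/common.sh xtrace above. Missing fields compare as 0;
# the first differing numeric field decides the result.
lt() {
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0   # first lower field => "less than"
        (( a > b )) && return 1   # first higher field => not less
    done
    return 1                      # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"          # the exact comparison from the trace
lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

This matches the trace above, where `lcov --version` reporting 1.15 makes `lt 1.15 2` succeed and selects the older `--rc lcov_*` option spelling.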
00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.025 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:10.026 14:49:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:10.026 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:10.026 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.026 
14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:10.026 Found net devices under 0000:31:00.0: cvl_0_0 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:10.026 Found net devices under 0000:31:00.1: cvl_0_1 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.026 14:49:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:10.026 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:10.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:10.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:29:10.026 00:29:10.026 --- 10.0.0.2 ping statistics --- 00:29:10.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.027 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:29:10.027 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:10.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:10.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:29:10.027 00:29:10.027 --- 10.0.0.1 ping statistics --- 00:29:10.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.027 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:29:10.027 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.027 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:29:10.027 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:10.027 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.027 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:10.027 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:10.027 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.027 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:10.027 14:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:10.027 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:10.027 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:10.027 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:10.027 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:10.027 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=4106762 00:29:10.027 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 4106762 00:29:10.027 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 4106762 ']' 00:29:10.027 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.027 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:10.027 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.027 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.027 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:10.027 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:10.027 [2024-11-20 14:49:17.068774] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:10.027 [2024-11-20 14:49:17.069923] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:29:10.027 [2024-11-20 14:49:17.069981] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.287 [2024-11-20 14:49:17.160847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:10.288 [2024-11-20 14:49:17.211790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.288 [2024-11-20 14:49:17.211843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.288 [2024-11-20 14:49:17.211851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.288 [2024-11-20 14:49:17.211858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.288 [2024-11-20 14:49:17.211864] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.288 [2024-11-20 14:49:17.213684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:10.288 [2024-11-20 14:49:17.213852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.288 [2024-11-20 14:49:17.213852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:10.288 [2024-11-20 14:49:17.291170] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:10.288 [2024-11-20 14:49:17.292064] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:10.288 [2024-11-20 14:49:17.292579] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:10.288 [2024-11-20 14:49:17.292713] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:10.857 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:10.857 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:29:10.857 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:10.857 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:10.857 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:10.858 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.858 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:10.858 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.858 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:10.858 [2024-11-20 14:49:17.902734] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.858 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.858 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:10.858 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.858 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:29:11.117 Malloc0 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:11.117 Delay0 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:11.117 [2024-11-20 14:49:17.966530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.117 14:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:11.117 [2024-11-20 14:49:18.027899] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:13.657 Initializing NVMe Controllers 00:29:13.657 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:13.657 controller IO queue size 128 less than required 00:29:13.657 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:13.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:13.657 Initialization complete. Launching workers. 
00:29:13.657 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28889 00:29:13.657 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28950, failed to submit 66 00:29:13.657 success 28889, unsuccessful 61, failed 0 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:13.658 rmmod nvme_tcp 00:29:13.658 rmmod nvme_fabrics 00:29:13.658 rmmod nvme_keyring 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:13.658 14:49:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 4106762 ']' 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 4106762 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 4106762 ']' 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 4106762 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4106762 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4106762' 00:29:13.658 killing process with pid 4106762 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 4106762 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 4106762 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:13.658 14:49:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.658 14:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.562 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:15.562 00:29:15.562 real 0m10.864s 00:29:15.562 user 0m9.983s 00:29:15.562 sys 0m5.272s 00:29:15.562 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.562 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:15.562 ************************************ 00:29:15.562 END TEST nvmf_abort 00:29:15.562 ************************************ 00:29:15.562 14:49:22 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:15.562 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:15.562 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.562 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:15.562 ************************************ 00:29:15.562 START TEST nvmf_ns_hotplug_stress 00:29:15.562 ************************************ 00:29:15.562 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:15.562 * Looking for test storage... 
00:29:15.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:15.562 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:15.562 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:29:15.562 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:15.824 14:49:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:15.824 14:49:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:15.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.824 --rc genhtml_branch_coverage=1 00:29:15.824 --rc genhtml_function_coverage=1 00:29:15.824 --rc genhtml_legend=1 00:29:15.824 --rc geninfo_all_blocks=1 00:29:15.824 --rc geninfo_unexecuted_blocks=1 00:29:15.824 00:29:15.824 ' 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:15.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.824 --rc genhtml_branch_coverage=1 00:29:15.824 --rc genhtml_function_coverage=1 00:29:15.824 --rc genhtml_legend=1 00:29:15.824 --rc geninfo_all_blocks=1 00:29:15.824 --rc geninfo_unexecuted_blocks=1 00:29:15.824 00:29:15.824 ' 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:15.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.824 --rc genhtml_branch_coverage=1 00:29:15.824 --rc genhtml_function_coverage=1 00:29:15.824 --rc genhtml_legend=1 00:29:15.824 --rc geninfo_all_blocks=1 00:29:15.824 --rc geninfo_unexecuted_blocks=1 00:29:15.824 00:29:15.824 ' 00:29:15.824 14:49:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:15.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.824 --rc genhtml_branch_coverage=1 00:29:15.824 --rc genhtml_function_coverage=1 00:29:15.824 --rc genhtml_legend=1 00:29:15.824 --rc geninfo_all_blocks=1 00:29:15.824 --rc geninfo_unexecuted_blocks=1 00:29:15.824 00:29:15.824 ' 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.824 14:49:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.824 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.825 
14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:15.825 14:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:21.106 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:21.106 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:21.106 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:21.106 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:21.106 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:21.106 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:21.106 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:21.106 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:21.106 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:21.106 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:21.107 
14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:21.107 14:49:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:21.107 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.107 14:49:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:21.107 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.107 
14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:21.107 Found net devices under 0000:31:00.0: cvl_0_0 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:21.107 Found net devices under 0000:31:00.1: cvl_0_1 00:29:21.107 
14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:21.107 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:21.108 14:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:21.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:21.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:29:21.108 00:29:21.108 --- 10.0.0.2 ping statistics --- 00:29:21.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.108 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:21.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:21.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:29:21.108 00:29:21.108 --- 10.0.0.1 ping statistics --- 00:29:21.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.108 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:21.108 14:49:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=4111883 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 4111883 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 4111883 ']' 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.108 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:21.367 [2024-11-20 14:49:28.194090] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:21.367 [2024-11-20 14:49:28.195195] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:29:21.367 [2024-11-20 14:49:28.195242] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:21.367 [2024-11-20 14:49:28.285139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:21.367 [2024-11-20 14:49:28.321963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:21.367 [2024-11-20 14:49:28.321996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:21.367 [2024-11-20 14:49:28.322004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:21.368 [2024-11-20 14:49:28.322011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:21.368 [2024-11-20 14:49:28.322017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:21.368 [2024-11-20 14:49:28.323522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:21.368 [2024-11-20 14:49:28.323664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.368 [2024-11-20 14:49:28.323666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:21.368 [2024-11-20 14:49:28.380709] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:21.368 [2024-11-20 14:49:28.381687] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:21.368 [2024-11-20 14:49:28.382030] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:21.368 [2024-11-20 14:49:28.382055] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:21.936 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.936 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:29:21.936 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:21.936 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:21.936 14:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:22.196 14:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.196 14:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:29:22.196 14:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:22.196 [2024-11-20 14:49:29.144432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.196 14:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:22.455 14:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:22.455 [2024-11-20 14:49:29.465173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.455 14:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:22.714 14:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:22.974 Malloc0 00:29:22.974 14:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:22.974 Delay0 00:29:22.974 14:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:23.234 14:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:23.495 NULL1 00:29:23.495 14:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:29:23.495 14:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4112257 00:29:23.495 14:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257 00:29:23.495 14:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.495 14:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:24.876 Read completed with error (sct=0, sc=11) 00:29:24.876 14:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:24.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:24.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:24.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:29:24.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:24.876 14:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:24.876 14:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:25.136 true 00:29:25.136 14:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257 00:29:25.136 14:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:26.074 14:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:26.074 14:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:26.074 14:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:26.333 true 00:29:26.333 14:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257 00:29:26.333 14:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:26.333 14:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:26.644 14:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:29:26.644 14:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:29:26.644 true
00:29:26.644 14:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:26.644 14:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:26.931 14:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:26.931 14:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:29:26.931 14:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:29:27.191 true
00:29:27.191 14:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:27.191 14:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:27.450 14:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:27.450 14:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:29:27.451 14:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:29:27.712 true
00:29:27.712 14:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:27.712 14:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:27.712 14:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:27.971 14:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:29:27.971 14:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:29:28.232 true
00:29:28.232 14:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:28.232 14:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:29.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:29.170 14:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:29.170 14:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:29:29.170 14:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:29:29.430 true
00:29:29.430 14:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:29.430 14:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:29.691 14:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:29.691 14:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:29:29.691 14:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:29:29.950 true
00:29:29.950 14:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:29.950 14:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:29.950 14:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:30.209 14:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:29:30.209 14:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:29:30.469 true
00:29:30.469 14:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:30.469 14:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:30.469 14:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:30.728 14:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:29:30.728 14:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:29:30.987 true
00:29:30.987 14:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:30.987 14:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:30.987 14:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:31.246 14:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:29:31.246 14:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:29:31.246 true
00:29:31.246 14:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:31.246 14:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:32.184 14:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:32.443 14:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:29:32.443 14:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:29:32.443 true
00:29:32.443 14:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:32.443 14:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:32.702 14:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:32.962 14:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:29:32.962 14:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:29:32.962 true
00:29:32.962 14:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:32.962 14:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:33.222 14:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:33.482 14:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:29:33.482 14:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:29:33.482 true
00:29:33.482 14:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:33.482 14:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:33.742 14:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:33.742 14:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:29:33.742 14:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:29:34.001 true
00:29:34.001 14:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:34.001 14:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:34.261 14:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:34.261 14:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:29:34.261 14:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:29:34.521 true
00:29:34.521 14:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:34.521 14:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:34.521 14:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:34.781 14:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:29:34.781 14:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:29:35.042 true
00:29:35.042 14:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:35.042 14:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:35.042 14:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:35.302 14:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:29:35.302 14:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:29:35.302 true
00:29:35.302 14:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:35.302 14:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:35.561 14:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:35.820 14:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:29:35.820 14:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:29:35.820 true
00:29:35.820 14:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:35.820 14:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:36.080 14:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:36.080 14:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:29:36.080 14:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:29:36.339 true
00:29:36.339 14:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:36.339 14:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:37.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:37.279 14:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:37.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:37.540 14:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:29:37.540 14:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:29:37.540 true
00:29:37.540 14:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:37.540 14:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:37.800 14:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:38.060 14:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:29:38.060 14:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:29:38.060 true
00:29:38.060 14:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:38.060 14:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:38.319 14:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:38.319 14:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:29:38.319 14:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:29:38.579 true
00:29:38.579 14:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:38.579 14:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:38.840 14:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:38.840 14:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:29:38.840 14:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:29:39.100 true
00:29:39.100 14:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:39.100 14:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:39.361 14:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:39.361 14:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:29:39.361 14:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:29:39.621 true
00:29:39.621 14:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:39.621 14:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:39.621 14:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:39.882 14:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:29:39.882 14:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:29:40.142 true
00:29:40.142 14:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:40.142 14:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:40.142 14:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:40.401 14:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:29:40.401 14:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:29:40.401 true
00:29:40.401 14:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:40.401 14:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:41.340 14:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:41.599 14:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:29:41.599 14:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:29:41.599 true
00:29:41.858 14:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:41.858 14:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:41.858 14:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:42.118 14:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:29:42.118 14:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:29:42.118 true
00:29:42.118 14:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:42.118 14:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:42.376 14:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:42.635 14:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:29:42.635 14:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:29:42.635 true
00:29:42.635 14:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:42.635 14:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:42.896 14:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:42.896 14:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:29:42.896 14:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:29:43.155 true
00:29:43.155 14:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:43.155 14:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:43.416 14:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:43.416 14:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:29:43.416 14:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:29:43.675 true
00:29:43.675 14:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:43.675 14:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:43.675 14:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:43.935 14:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:29:43.935 14:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:29:44.197 true
00:29:44.197 14:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:44.197 14:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:44.197 14:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:44.457 14:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:29:44.457 14:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:29:44.457 true
00:29:44.717 14:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:44.718 14:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:45.657 14:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:45.657 14:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:29:45.657 14:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:29:45.917 true
00:29:45.917 14:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:45.917 14:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:45.917 14:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:46.177 14:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:29:46.177 14:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:29:46.177 true
00:29:46.177 14:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:46.177 14:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:46.437 14:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:46.696 14:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:29:46.696 14:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:29:46.696 true
00:29:46.696 14:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:46.696 14:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:46.956 14:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:47.217 14:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:29:47.217 14:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:29:47.217 true
00:29:47.217 14:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:47.217 14:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:47.477 14:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:47.477 14:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039
00:29:47.477 14:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039
00:29:47.736 true
00:29:47.736 14:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:47.736 14:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:48.674 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:48.674 14:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:48.674 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:48.674 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:48.934 14:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040
00:29:48.934 14:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040
00:29:48.934 true
00:29:48.934 14:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:48.934 14:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:49.194 14:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:49.194 14:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041
00:29:49.195 14:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041
00:29:49.454 true
00:29:49.454 14:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:49.454 14:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:49.713 14:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:49.713 14:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042
00:29:49.713 14:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042
00:29:49.973 true
00:29:49.973 14:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:49.973 14:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:49.973 14:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:50.232 14:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043
00:29:50.232 14:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043
00:29:50.491 true
00:29:50.491 14:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257
00:29:50.491 14:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:50.491 14:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.750 14:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:29:50.750 14:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:29:50.750 true 00:29:51.009 14:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257 00:29:51.009 14:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.950 14:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.950 14:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:29:51.950 14:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:29:52.210 true 00:29:52.210 14:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257 00:29:52.210 14:49:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.210 14:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.470 14:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:29:52.470 14:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:29:52.470 true 00:29:52.470 14:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257 00:29:52.470 14:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.728 14:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.988 14:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:29:52.988 14:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:29:52.988 true 00:29:52.988 14:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257 
00:29:52.988 14:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.248 14:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.248 14:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:29:53.248 14:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:29:53.508 true 00:29:53.508 14:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257 00:29:53.508 14:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.768 14:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.768 Initializing NVMe Controllers 00:29:53.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:53.768 Controller IO queue size 128, less than required. 00:29:53.768 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:53.768 Controller IO queue size 128, less than required. 
00:29:53.768 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:53.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:53.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:53.768 Initialization complete. Launching workers. 00:29:53.768 ======================================================== 00:29:53.768 Latency(us) 00:29:53.768 Device Information : IOPS MiB/s Average min max 00:29:53.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 390.17 0.19 106756.57 1872.34 1013003.27 00:29:53.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10081.90 4.92 12695.54 1609.01 345339.55 00:29:53.768 ======================================================== 00:29:53.768 Total : 10472.07 5.11 16200.09 1609.01 1013003.27 00:29:53.768 00:29:53.768 14:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:29:53.768 14:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:29:54.028 true 00:29:54.028 14:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4112257 00:29:54.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4112257) - No such process 00:29:54.028 14:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4112257 00:29:54.028 14:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.288 14:50:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:54.288 14:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:54.288 14:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:54.288 14:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:54.288 14:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:54.288 14:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:54.548 null0 00:29:54.548 14:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:54.548 14:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:54.548 14:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:54.548 null1 00:29:54.548 14:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:54.548 14:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:54.548 14:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:54.808 null2 00:29:54.808 14:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:54.808 14:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:54.808 14:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:55.067 null3 00:29:55.067 14:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:55.067 14:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:55.067 14:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:55.067 null4 00:29:55.067 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:55.067 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:55.067 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:55.325 null5 00:29:55.325 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:55.325 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:55.325 14:50:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:55.325 null6 00:29:55.325 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:55.325 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:55.325 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:55.586 null7 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:55.586 14:50:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:55.586 14:50:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:55.586 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:55.587 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:55.587 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:55.587 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:55.587 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:55.587 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:55.587 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:55.587 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4119363 4119364 4119365 4119367 4119370 4119371 4119372 4119376 00:29:55.587 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:55.587 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.587 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:55.587 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:55.587 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:55.587 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:55.587 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.587 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:55.587 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:55.846 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:55.846 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:55.846 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:55.846 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:55.846 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:55.846 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:55.846 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:55.846 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:55.846 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:55.846 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:55.846 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:55.846 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:55.846 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:55.846 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:55.846 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:55.846 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:55.847 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:55.847 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:55.847 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:55.847 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:55.847 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:55.847 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:55.847 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:55.847 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:55.847 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:55.847 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:55.847 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:55.847 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:55.847 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:55.847 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:55.847 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:56.107 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:56.107 14:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:56.107 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:56.107 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:56.107 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:56.107 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:56.107 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:56.107 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:29:56.107 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.107 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.107 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:56.107 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.107 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.107 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.366 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:56.625 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.625 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.625 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:56.625 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:56.625 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.625 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.625 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:56.625 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.625 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.625 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:56.625 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.625 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.625 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:56.625 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.625 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.625 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:56.625 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:56.626 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.626 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.626 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:56.626 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:56.626 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:56.626 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.626 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.626 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:56.626 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:56.884 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:56.884 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.884 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.884 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:56.884 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:56.884 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:29:56.884 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.884 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.884 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:56.884 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:56.885 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.885 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.885 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:56.885 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.885 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.885 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:56.885 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.885 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.885 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:56.885 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:56.885 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:56.885 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:56.885 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:57.145 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.145 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.145 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:57.145 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.145 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.145 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:57.145 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:57.145 14:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.145 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:57.405 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:57.665 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:57.666 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:57.666 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:57.666 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.666 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.666 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:57.925 14:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:57.925 14:50:04
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:58.186 14:50:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.186 14:50:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:58.186 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.445 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:58.445 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.445 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.445 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:58.445 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:58.445 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:58.445 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:58.445 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.446 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.446 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:58.446 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:58.446 14:50:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:58.446 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.446 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.446 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:58.705 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
7 nqn.2016-06.io.spdk:cnode1 null6 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:58.706 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:58.966 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.966 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.966 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:58.966 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:58.966 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.966 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.966 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.966 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.966 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.966 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.966 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.966 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.966 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.966 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.967 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.967 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.967 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:58.967 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:58.967 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:58.967 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:58.967 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:58.967 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:58.967 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:58.967 14:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:58.967 rmmod nvme_tcp 00:29:58.967 rmmod nvme_fabrics 00:29:59.226 rmmod nvme_keyring 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 4111883 ']' 00:29:59.226 
14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 4111883 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 4111883 ']' 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 4111883 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4111883 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4111883' 00:29:59.226 killing process with pid 4111883 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 4111883 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 4111883 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.226 14:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:01.759 00:30:01.759 real 0m45.721s 00:30:01.759 user 2m55.183s 00:30:01.759 sys 0m17.567s 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:01.759 ************************************ 00:30:01.759 END TEST nvmf_ns_hotplug_stress 00:30:01.759 ************************************ 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:01.759 ************************************ 00:30:01.759 START TEST nvmf_delete_subsystem 00:30:01.759 ************************************ 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:01.759 * Looking for test storage... 00:30:01.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- 
# local ver2 ver2_l 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:01.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.759 --rc genhtml_branch_coverage=1 00:30:01.759 --rc genhtml_function_coverage=1 00:30:01.759 --rc genhtml_legend=1 00:30:01.759 --rc geninfo_all_blocks=1 00:30:01.759 --rc geninfo_unexecuted_blocks=1 00:30:01.759 00:30:01.759 ' 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:01.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.759 --rc genhtml_branch_coverage=1 00:30:01.759 --rc genhtml_function_coverage=1 00:30:01.759 --rc genhtml_legend=1 00:30:01.759 --rc geninfo_all_blocks=1 00:30:01.759 --rc geninfo_unexecuted_blocks=1 00:30:01.759 00:30:01.759 ' 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:01.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.759 --rc genhtml_branch_coverage=1 00:30:01.759 --rc genhtml_function_coverage=1 00:30:01.759 --rc genhtml_legend=1 00:30:01.759 --rc geninfo_all_blocks=1 00:30:01.759 --rc geninfo_unexecuted_blocks=1 00:30:01.759 00:30:01.759 ' 00:30:01.759 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:01.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.759 --rc genhtml_branch_coverage=1 00:30:01.759 --rc genhtml_function_coverage=1 00:30:01.759 --rc genhtml_legend=1 00:30:01.759 --rc geninfo_all_blocks=1 00:30:01.760 --rc geninfo_unexecuted_blocks=1 00:30:01.760 00:30:01.760 ' 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:01.760 14:50:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:01.760 14:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:07.039 14:50:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:07.039 14:50:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:07.039 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:07.039 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.039 14:50:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:07.039 Found net devices under 0000:31:00.0: cvl_0_0 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:07.039 Found net devices under 0000:31:00.1: cvl_0_1 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:30:07.039 14:50:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:07.039 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:07.040 14:50:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:07.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:07.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:30:07.040 00:30:07.040 --- 10.0.0.2 ping statistics --- 00:30:07.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.040 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:07.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:07.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:30:07.040 00:30:07.040 --- 10.0.0.1 ping statistics --- 00:30:07.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.040 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:30:07.040 14:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:07.040 
14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=4124608 00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 4124608 00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 4124608 ']' 00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:07.040 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:07.040 [2024-11-20 14:50:14.069890] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:07.040 [2024-11-20 14:50:14.071021] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:30:07.040 [2024-11-20 14:50:14.071074] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.299 [2024-11-20 14:50:14.162523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:07.299 [2024-11-20 14:50:14.212367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:07.299 [2024-11-20 14:50:14.212414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:07.299 [2024-11-20 14:50:14.212423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:07.299 [2024-11-20 14:50:14.212430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:07.299 [2024-11-20 14:50:14.212436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:07.299 [2024-11-20 14:50:14.213893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.299 [2024-11-20 14:50:14.213899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.299 [2024-11-20 14:50:14.284673] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:07.299 [2024-11-20 14:50:14.284790] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:07.299 [2024-11-20 14:50:14.284927] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 
-- # set +x 00:30:07.868 [2024-11-20 14:50:14.894818] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:07.868 [2024-11-20 14:50:14.915163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.868 14:50:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:07.868 NULL1 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.868 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:08.128 Delay0 00:30:08.128 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.128 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.128 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.128 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:08.129 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.129 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4124875 00:30:08.129 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:08.129 14:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:08.129 [2024-11-20 14:50:14.995858] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:10.036 14:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:10.036 14:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.036 14:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:10.296 Write completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Write completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 starting I/O failed: -6 00:30:10.296 Write completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Write completed with error (sct=0, sc=8) 00:30:10.296 starting I/O failed: -6 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 starting I/O failed: -6 00:30:10.296 Write completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 starting I/O failed: -6 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Write completed with error (sct=0, sc=8) 
00:30:10.296 Write completed with error (sct=0, sc=8) 00:30:10.296 starting I/O failed: -6 00:30:10.296 Write completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 starting I/O failed: -6 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Write completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 starting I/O failed: -6 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Write completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 starting I/O failed: -6 00:30:10.296 Write completed with error (sct=0, sc=8) 00:30:10.296 Write completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Write completed with error (sct=0, sc=8) 00:30:10.296 starting I/O failed: -6 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Write completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 starting I/O failed: -6 00:30:10.296 Write completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 [2024-11-20 14:50:17.132680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7924000c40 is same with the state(6) to be set 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 starting I/O failed: -6 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Write completed with error (sct=0, sc=8) 00:30:10.296 Read 
completed with error (sct=0, sc=8) 00:30:10.296 starting I/O failed: -6 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 starting I/O failed: -6 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 Write completed with error (sct=0, sc=8) 00:30:10.296 Read completed with error (sct=0, sc=8) 00:30:10.296 starting I/O failed: -6 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 starting I/O failed: -6 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 starting I/O failed: -6 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 starting I/O failed: -6 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 starting I/O failed: -6 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 [2024-11-20 14:50:17.133588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d964a0 is same with the state(6) to be set 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, 
sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write 
completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 [2024-11-20 14:50:17.133774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f792400d490 is same with the state(6) to be set 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed 
with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Write completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 Read completed with error (sct=0, sc=8) 00:30:10.297 [2024-11-20 14:50:17.133924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d960e0 is same with the state(6) to be set 00:30:11.237 [2024-11-20 14:50:18.095840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d975e0 is same with the state(6) to be set 00:30:11.237 Read completed with error (sct=0, sc=8) 00:30:11.237 Read completed with error (sct=0, sc=8) 00:30:11.237 Write completed with error (sct=0, sc=8) 00:30:11.237 Write completed with error (sct=0, sc=8) 00:30:11.237 Write completed with error (sct=0, sc=8) 00:30:11.237 Read completed with error (sct=0, sc=8) 00:30:11.237 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Read completed with 
error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 [2024-11-20 14:50:18.132888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95f00 is same with the state(6) to be set 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 [2024-11-20 14:50:18.133187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d962c0 is same with the state(6) to be set 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error 
(sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 [2024-11-20 14:50:18.133458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f792400d7c0 is same with the state(6) to be set 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 
00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 Write completed with error (sct=0, sc=8) 00:30:11.238 Read completed with error (sct=0, sc=8) 00:30:11.238 [2024-11-20 14:50:18.133551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f792400d020 is same with the state(6) to be set 00:30:11.238 Initializing NVMe Controllers 00:30:11.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:11.238 Controller IO queue size 128, less than required. 00:30:11.238 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:11.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:11.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:11.238 Initialization complete. Launching workers. 00:30:11.238 ======================================================== 00:30:11.238 Latency(us) 00:30:11.238 Device Information : IOPS MiB/s Average min max 00:30:11.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 142.45 0.07 985119.01 344.30 2000367.63 00:30:11.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.80 0.08 944565.17 1101.28 2001607.08 00:30:11.238 ======================================================== 00:30:11.238 Total : 306.25 0.15 963428.95 344.30 2001607.08 00:30:11.238 00:30:11.238 [2024-11-20 14:50:18.134036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d975e0 (9): Bad file descriptor 00:30:11.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:11.238 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.238 14:50:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:30:11.238 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4124875 00:30:11.238 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4124875 00:30:11.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4124875) - No such process 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4124875 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4124875 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 4124875 00:30:11.806 14:50:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:11.806 [2024-11-20 14:50:18.655066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4125622 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4125622 00:30:11.806 14:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:11.806 [2024-11-20 14:50:18.705223] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:30:12.374 14:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:12.374 14:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4125622 00:30:12.374 14:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:12.633 14:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:12.633 14:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4125622 00:30:12.633 14:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:13.202 14:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:13.202 14:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4125622 00:30:13.202 14:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:13.880 14:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:13.880 14:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4125622 00:30:13.880 14:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:14.181 14:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:14.181 14:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4125622 00:30:14.181 14:50:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:14.751 14:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:14.751 14:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4125622 00:30:14.751 14:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:15.012 Initializing NVMe Controllers 00:30:15.012 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:15.012 Controller IO queue size 128, less than required. 00:30:15.012 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:15.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:15.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:15.012 Initialization complete. Launching workers. 
00:30:15.012 ======================================================== 00:30:15.012 Latency(us) 00:30:15.012 Device Information : IOPS MiB/s Average min max 00:30:15.012 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004075.15 1000139.63 1042666.01 00:30:15.012 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003056.11 1000213.88 1008131.11 00:30:15.012 ======================================================== 00:30:15.012 Total : 256.00 0.12 1003565.63 1000139.63 1042666.01 00:30:15.012 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4125622 00:30:15.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4125622) - No such process 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4125622 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:15.272 rmmod nvme_tcp 00:30:15.272 rmmod nvme_fabrics 00:30:15.272 rmmod nvme_keyring 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 4124608 ']' 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 4124608 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 4124608 ']' 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 4124608 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4124608 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:15.272 14:50:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4124608' 00:30:15.272 killing process with pid 4124608 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 4124608 00:30:15.272 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 4124608 00:30:15.532 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:15.532 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:15.532 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:15.532 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:15.532 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:15.532 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:30:15.532 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:30:15.532 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:15.532 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:15.532 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.532 14:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.532 14:50:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.439 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:17.439 00:30:17.439 real 0m16.142s 00:30:17.439 user 0m25.571s 00:30:17.439 sys 0m6.001s 00:30:17.439 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:17.439 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:17.439 ************************************ 00:30:17.439 END TEST nvmf_delete_subsystem 00:30:17.439 ************************************ 00:30:17.439 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:17.439 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:17.439 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:17.439 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:17.439 ************************************ 00:30:17.439 START TEST nvmf_host_management 00:30:17.439 ************************************ 00:30:17.439 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:17.700 * Looking for test storage... 
00:30:17.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:17.700 14:50:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:17.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.700 --rc genhtml_branch_coverage=1 00:30:17.700 --rc genhtml_function_coverage=1 00:30:17.700 --rc genhtml_legend=1 00:30:17.700 --rc geninfo_all_blocks=1 00:30:17.700 --rc geninfo_unexecuted_blocks=1 00:30:17.700 00:30:17.700 ' 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:17.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.700 --rc genhtml_branch_coverage=1 00:30:17.700 --rc genhtml_function_coverage=1 00:30:17.700 --rc genhtml_legend=1 00:30:17.700 --rc geninfo_all_blocks=1 00:30:17.700 --rc geninfo_unexecuted_blocks=1 00:30:17.700 00:30:17.700 ' 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:17.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.700 --rc genhtml_branch_coverage=1 00:30:17.700 --rc genhtml_function_coverage=1 00:30:17.700 --rc genhtml_legend=1 00:30:17.700 --rc geninfo_all_blocks=1 00:30:17.700 --rc geninfo_unexecuted_blocks=1 00:30:17.700 00:30:17.700 ' 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:17.700 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.700 --rc genhtml_branch_coverage=1 00:30:17.700 --rc genhtml_function_coverage=1 00:30:17.700 --rc genhtml_legend=1 00:30:17.700 --rc geninfo_all_blocks=1 00:30:17.700 --rc geninfo_unexecuted_blocks=1 00:30:17.700 00:30:17.700 ' 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:17.700 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:17.701 14:50:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.701 
14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:17.701 14:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:22.979 
14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:22.979 14:50:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:22.979 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:22.980 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.980 14:50:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:22.980 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.980 14:50:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:22.980 Found net devices under 0000:31:00.0: cvl_0_0 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:22.980 Found net devices under 0000:31:00.1: cvl_0_1 00:30:22.980 14:50:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:22.980 14:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:22.980 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:22.980 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:22.980 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:22.980 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:22.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:22.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:30:22.980 00:30:22.980 --- 10.0.0.2 ping statistics --- 00:30:22.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.980 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:30:22.980 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:22.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:22.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.460 ms 00:30:22.980 00:30:22.980 --- 10.0.0.1 ping statistics --- 00:30:22.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.980 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:30:22.980 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:22.980 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:22.980 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:22.980 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:22.980 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:22.980 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:22.980 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
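The `nvmf/common.sh` trace above sets up an isolated NVMe/TCP test network by moving one port of the NIC pair into a network namespace. Distilled into a standalone sketch (requires root; the interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are specific to this test bed):

```shell
# Recreate the namespace plumbing traced above: the target-side port moves
# into its own netns, the initiator-side port stays in the root namespace.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"   # target side lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
# Allow NVMe/TCP (port 4420) in, then verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1
```

This is why the target is later launched under `ip netns exec cvl_0_0_ns_spdk …` (the `NVMF_TARGET_NS_CMD` prefix): the `nvmf_tgt` listener binds inside the namespace on 10.0.0.2, while bdevperf connects from the root namespace over the physical link.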
00:30:22.980 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:22.980 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:23.241 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:23.241 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:23.241 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:23.241 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:23.241 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:23.241 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:23.241 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=4130887 00:30:23.241 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 4130887 00:30:23.241 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4130887 ']' 00:30:23.241 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.241 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:23.241 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:23.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.241 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:23.241 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:23.241 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:23.241 [2024-11-20 14:50:30.100431] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:23.241 [2024-11-20 14:50:30.101598] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:30:23.241 [2024-11-20 14:50:30.101650] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.241 [2024-11-20 14:50:30.197873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:23.241 [2024-11-20 14:50:30.252448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:23.241 [2024-11-20 14:50:30.252505] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:23.241 [2024-11-20 14:50:30.252514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:23.241 [2024-11-20 14:50:30.252521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:23.241 [2024-11-20 14:50:30.252527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:23.241 [2024-11-20 14:50:30.254682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:23.241 [2024-11-20 14:50:30.254842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:23.241 [2024-11-20 14:50:30.255009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.241 [2024-11-20 14:50:30.255010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:23.501 [2024-11-20 14:50:30.333818] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:23.501 [2024-11-20 14:50:30.334455] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:23.501 [2024-11-20 14:50:30.334736] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:23.501 [2024-11-20 14:50:30.334847] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:23.501 [2024-11-20 14:50:30.334853] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:24.071 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:24.071 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:24.071 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:24.071 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:24.071 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:24.071 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.071 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:24.071 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.071 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:24.071 [2024-11-20 14:50:30.943893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.071 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.071 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:24.071 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:24.071 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:24.071 14:50:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:24.071 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:24.071 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:24.071 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.071 14:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:24.071 Malloc0 00:30:24.071 [2024-11-20 14:50:31.015842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4131012 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4131012 /var/tmp/bdevperf.sock 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4131012 ']' 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:24.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:24.071 { 00:30:24.071 "params": { 00:30:24.071 "name": "Nvme$subsystem", 00:30:24.071 "trtype": "$TEST_TRANSPORT", 00:30:24.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.071 "adrfam": "ipv4", 00:30:24.071 "trsvcid": "$NVMF_PORT", 00:30:24.071 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.071 "hdgst": ${hdgst:-false}, 00:30:24.071 "ddgst": ${ddgst:-false} 00:30:24.071 }, 00:30:24.071 "method": "bdev_nvme_attach_controller" 00:30:24.071 } 00:30:24.071 EOF 00:30:24.071 )") 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:24.071 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:24.071 "params": { 00:30:24.071 "name": "Nvme0", 00:30:24.071 "trtype": "tcp", 00:30:24.071 "traddr": "10.0.0.2", 00:30:24.071 "adrfam": "ipv4", 00:30:24.071 "trsvcid": "4420", 00:30:24.071 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:24.071 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:24.071 "hdgst": false, 00:30:24.071 "ddgst": false 00:30:24.071 }, 00:30:24.071 "method": "bdev_nvme_attach_controller" 00:30:24.071 }' 00:30:24.071 [2024-11-20 14:50:31.087137] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:30:24.071 [2024-11-20 14:50:31.087183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4131012 ] 00:30:24.332 [2024-11-20 14:50:31.159096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.332 [2024-11-20 14:50:31.195231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.332 Running I/O for 10 seconds... 
00:30:24.332 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:24.332 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:24.332 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:24.332 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.332 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:24.332 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.332 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:24.332 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:24.332 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:24.332 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:24.332 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:24.332 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:24.332 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:24.332 14:50:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:24.332 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:24.332 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:24.332 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.332 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:24.591 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.591 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=65 00:30:24.591 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 65 -ge 100 ']' 00:30:24.591 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:30:24.853 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:30:24.853 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:24.853 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:24.853 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:24.853 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:24.853 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:24.853 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.853 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:30:24.853 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:30:24.853 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:24.853 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:24.853 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:24.853 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:24.853 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.853 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:24.853 [2024-11-20 14:50:31.699531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35800 is same with the state(6) to be set 00:30:24.853 [2024-11-20 14:50:31.699577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35800 is same with the state(6) to be set 00:30:24.853 [2024-11-20 14:50:31.699587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35800 is same with the state(6) to be set 00:30:24.853 [2024-11-20 14:50:31.699596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xc35800 is same with the state(6) to be set 00:30:24.853 [2024-11-20 14:50:31.699604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35800 is same with the state(6) to be set 00:30:24.853 [2024-11-20 14:50:31.699612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35800 is same with the state(6) to be set 00:30:24.853 [2024-11-20 14:50:31.699620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35800 is same with the state(6) to be set 00:30:24.853 [2024-11-20 14:50:31.699628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35800 is same with the state(6) to be set 00:30:24.853 [2024-11-20 14:50:31.699635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35800 is same with the state(6) to be set 00:30:24.853 [2024-11-20 14:50:31.699642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35800 is same with the state(6) to be set 00:30:24.853 [2024-11-20 14:50:31.699648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35800 is same with the state(6) to be set 00:30:24.853 [2024-11-20 14:50:31.699655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35800 is same with the state(6) to be set 00:30:24.853 [2024-11-20 14:50:31.699661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35800 is same with the state(6) to be set 00:30:24.853 [2024-11-20 14:50:31.699669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35800 is same with the state(6) to be set 00:30:24.853 [2024-11-20 14:50:31.699869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:24.853 [2024-11-20 14:50:31.699904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.699914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:24.853 [2024-11-20 14:50:31.699922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.699930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:24.853 [2024-11-20 14:50:31.699938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.699946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:24.853 [2024-11-20 14:50:31.699953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.699960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78b00 is same with the state(6) to be set 00:30:24.853 [2024-11-20 14:50:31.700847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.853 [2024-11-20 14:50:31.700862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.700877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.853 [2024-11-20 14:50:31.700884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.700899] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.853 [2024-11-20 14:50:31.700906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.700916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.853 [2024-11-20 14:50:31.700923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.700933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.853 [2024-11-20 14:50:31.700940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.700950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.853 [2024-11-20 14:50:31.700957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.700966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.853 [2024-11-20 14:50:31.700974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.700983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.853 [2024-11-20 14:50:31.700991] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.701000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.853 [2024-11-20 14:50:31.701007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.701017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.853 [2024-11-20 14:50:31.701024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.701034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.853 [2024-11-20 14:50:31.701041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.701051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.853 [2024-11-20 14:50:31.701058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.701067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.853 [2024-11-20 14:50:31.701075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.701084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.853 [2024-11-20 14:50:31.701091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.853 [2024-11-20 14:50:31.701100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 
[2024-11-20 14:50:31.701187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 
14:50:31.701573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.854 [2024-11-20 14:50:31.701774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.854 [2024-11-20 14:50:31.701782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.855 [2024-11-20 14:50:31.701791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.855 [2024-11-20 14:50:31.701799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.855 [2024-11-20 14:50:31.701808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.855 [2024-11-20 14:50:31.701816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.855 [2024-11-20 14:50:31.701825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.855 [2024-11-20 14:50:31.701833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.855 [2024-11-20 14:50:31.701842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.855 [2024-11-20 14:50:31.701850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.855 [2024-11-20 14:50:31.701859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.855 [2024-11-20 14:50:31.701867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.855 [2024-11-20 14:50:31.701876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.855 [2024-11-20 14:50:31.701885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.855 [2024-11-20 14:50:31.701895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.855 [2024-11-20 14:50:31.701903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.855 [2024-11-20 14:50:31.701912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.855 [2024-11-20 14:50:31.701920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.855 [2024-11-20 14:50:31.701929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.855 [2024-11-20 14:50:31.701937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.855 [2024-11-20 14:50:31.701946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.855 [2024-11-20 14:50:31.701954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.855 
[2024-11-20 14:50:31.703171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:24.855 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.855 task offset: 65536 on job bdev=Nvme0n1 fails 00:30:24.855 00:30:24.855 Latency(us) 00:30:24.855 [2024-11-20T13:50:31.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.855 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:24.855 Job: Nvme0n1 ended in about 0.36 seconds with error 00:30:24.855 Verification LBA range: start 0x0 length 0x400 00:30:24.855 Nvme0n1 : 0.36 1403.72 87.73 175.47 0.00 39193.95 1570.13 36044.80 00:30:24.855 [2024-11-20T13:50:31.915Z] =================================================================================================================== 00:30:24.855 [2024-11-20T13:50:31.915Z] Total : 1403.72 87.73 175.47 0.00 39193.95 1570.13 36044.80 00:30:24.855 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:24.855 [2024-11-20 14:50:31.705168] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:24.855 [2024-11-20 14:50:31.705191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78b00 (9): Bad file descriptor 00:30:24.855 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.855 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:24.855 [2024-11-20 14:50:31.706373] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:30:24.855 [2024-11-20 14:50:31.706441] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:24.855 [2024-11-20 14:50:31.706461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.855 [2024-11-20 14:50:31.706473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:30:24.855 [2024-11-20 14:50:31.706481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:30:24.855 [2024-11-20 14:50:31.706488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.855 [2024-11-20 14:50:31.706495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a78b00 00:30:24.855 [2024-11-20 14:50:31.706514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78b00 (9): Bad file descriptor 00:30:24.855 [2024-11-20 14:50:31.706526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:24.855 [2024-11-20 14:50:31.706533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:24.855 [2024-11-20 14:50:31.706542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:24.855 [2024-11-20 14:50:31.706551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:30:24.855 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.855 14:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:25.794 14:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4131012 00:30:25.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4131012) - No such process 00:30:25.794 14:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:25.794 14:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:25.794 14:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:25.794 14:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:25.794 14:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:25.794 14:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:25.794 14:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:25.794 14:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:25.794 { 00:30:25.794 "params": { 00:30:25.794 "name": "Nvme$subsystem", 00:30:25.794 "trtype": "$TEST_TRANSPORT", 00:30:25.794 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:30:25.794 "adrfam": "ipv4", 00:30:25.794 "trsvcid": "$NVMF_PORT", 00:30:25.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.794 "hdgst": ${hdgst:-false}, 00:30:25.794 "ddgst": ${ddgst:-false} 00:30:25.794 }, 00:30:25.794 "method": "bdev_nvme_attach_controller" 00:30:25.794 } 00:30:25.794 EOF 00:30:25.794 )") 00:30:25.794 14:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:25.794 14:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:25.794 14:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:25.794 14:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:25.794 "params": { 00:30:25.794 "name": "Nvme0", 00:30:25.794 "trtype": "tcp", 00:30:25.794 "traddr": "10.0.0.2", 00:30:25.794 "adrfam": "ipv4", 00:30:25.794 "trsvcid": "4420", 00:30:25.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:25.794 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:25.794 "hdgst": false, 00:30:25.794 "ddgst": false 00:30:25.794 }, 00:30:25.794 "method": "bdev_nvme_attach_controller" 00:30:25.794 }' 00:30:25.794 [2024-11-20 14:50:32.749814] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:30:25.794 [2024-11-20 14:50:32.749869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4131436 ] 00:30:25.794 [2024-11-20 14:50:32.828125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.054 [2024-11-20 14:50:32.862653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.054 Running I/O for 1 seconds... 
00:30:27.250 1542.00 IOPS, 96.38 MiB/s 00:30:27.250 Latency(us) 00:30:27.250 [2024-11-20T13:50:34.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.250 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:27.250 Verification LBA range: start 0x0 length 0x400 00:30:27.250 Nvme0n1 : 1.01 1586.04 99.13 0.00 0.00 39617.62 2334.72 36044.80 00:30:27.250 [2024-11-20T13:50:34.310Z] =================================================================================================================== 00:30:27.250 [2024-11-20T13:50:34.310Z] Total : 1586.04 99.13 0.00 0.00 39617.62 2334.72 36044.80 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:27.250 14:50:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:27.250 rmmod nvme_tcp 00:30:27.250 rmmod nvme_fabrics 00:30:27.250 rmmod nvme_keyring 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 4130887 ']' 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 4130887 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 4130887 ']' 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 4130887 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4130887 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:27.250 14:50:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4130887' 00:30:27.250 killing process with pid 4130887 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 4130887 00:30:27.250 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 4130887 00:30:27.510 [2024-11-20 14:50:34.392640] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:27.510 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:27.510 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:27.510 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:27.510 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:27.510 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:30:27.510 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:30:27.510 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:27.510 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:27.510 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:27.510 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.510 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.510 14:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.416 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:29.416 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:29.416 00:30:29.416 real 0m11.972s 00:30:29.416 user 0m16.537s 00:30:29.416 sys 0m5.531s 00:30:29.416 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:29.416 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.416 ************************************ 00:30:29.416 END TEST nvmf_host_management 00:30:29.416 ************************************ 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:29.676 ************************************ 00:30:29.676 START TEST nvmf_lvol 00:30:29.676 ************************************ 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:29.676 * Looking for test storage... 
00:30:29.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:29.676 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:29.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.676 --rc genhtml_branch_coverage=1 00:30:29.677 --rc genhtml_function_coverage=1 00:30:29.677 --rc genhtml_legend=1 00:30:29.677 --rc geninfo_all_blocks=1 00:30:29.677 --rc geninfo_unexecuted_blocks=1 00:30:29.677 00:30:29.677 ' 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:29.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.677 --rc genhtml_branch_coverage=1 00:30:29.677 --rc genhtml_function_coverage=1 00:30:29.677 --rc genhtml_legend=1 00:30:29.677 --rc geninfo_all_blocks=1 00:30:29.677 --rc geninfo_unexecuted_blocks=1 00:30:29.677 00:30:29.677 ' 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:29.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.677 --rc genhtml_branch_coverage=1 00:30:29.677 --rc genhtml_function_coverage=1 00:30:29.677 --rc genhtml_legend=1 00:30:29.677 --rc geninfo_all_blocks=1 00:30:29.677 --rc geninfo_unexecuted_blocks=1 00:30:29.677 00:30:29.677 ' 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:29.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.677 --rc genhtml_branch_coverage=1 00:30:29.677 --rc genhtml_function_coverage=1 00:30:29.677 --rc genhtml_legend=1 00:30:29.677 --rc geninfo_all_blocks=1 00:30:29.677 --rc geninfo_unexecuted_blocks=1 00:30:29.677 00:30:29.677 ' 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
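The xtrace above replays `scripts/common.sh`'s `cmp_versions`/`lt` helper deciding whether the installed `lcov` (1.15) is older than 2 before enabling the branch/function coverage flags. A condensed standalone sketch of that comparison logic follows — the function name `lt` mirrors the helper seen in the log, but the body here is a simplified reimplementation for illustration, not SPDK's actual code:

```shell
#!/usr/bin/env bash
# Compare two dotted version strings numerically, field by field.
# Returns 0 (true) if $1 < $2, mimicking the lt/cmp_versions calls in the log.
lt() {
  local IFS=.
  # Split each version on '.' into arrays (e.g. "1.15" -> (1 15)).
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i max=${#a[@]}
  (( ${#b[@]} > max )) && max=${#b[@]}
  for ((i = 0; i < max; i++)); do
    # Missing fields count as 0, so "2" compares like "2.0".
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"   # prints "1.15 < 2"
```

This matches the branch the log takes: `lt 1.15 2` succeeds, so the script proceeds to set `LCOV_OPTS` with the pre-2.0 `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` flags.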
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:29.677 
14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:29.677 14:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:34.952 14:50:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:34.952 14:50:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:34.952 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:34.952 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.952 14:50:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:34.952 Found net devices under 0000:31:00.0: cvl_0_0 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.952 14:50:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:34.952 Found net devices under 0000:31:00.1: cvl_0_1 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:34.952 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:34.953 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:34.953 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:34.953 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:34.953 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.953 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:34.953 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:34.953 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:34.953 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:34.953 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:34.953 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:34.953 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:34.953 14:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:35.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:30:35.213 00:30:35.213 --- 10.0.0.2 ping statistics --- 00:30:35.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.213 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:35.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:30:35.213 00:30:35.213 --- 10.0.0.1 ping statistics --- 00:30:35.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.213 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=4136083 
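The `nvmf_tcp_init` sequence above moves one port of the E810 NIC pair into a network namespace so target and initiator can talk over real TCP on one host, then verifies the path with `ping`. A minimal sketch of that wiring, using the interface names (`cvl_0_0`/`cvl_0_1`), addresses, and iptables rule taken from the log — it requires root and the actual NICs, so it is illustrative only, not a runnable test:

```shell
#!/usr/bin/env bash
# Sketch of the namespace setup performed by nvmf/common.sh's nvmf_tcp_init.
set -e
NS=cvl_0_0_ns_spdk

ip netns add "$NS"                                   # target-side namespace
ip link set cvl_0_0 netns "$NS"                      # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP (host side)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP (in ns)
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to port 4420 on the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify both directions, as the log does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

With this in place, the target app can be launched under `ip netns exec cvl_0_0_ns_spdk` (as the `NVMF_TARGET_NS_CMD` array in the log shows) while the initiator connects from the default namespace.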
00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 4136083 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 4136083 ']' 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:35.213 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:35.213 [2024-11-20 14:50:42.115426] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:35.213 [2024-11-20 14:50:42.116406] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:30:35.213 [2024-11-20 14:50:42.116444] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.213 [2024-11-20 14:50:42.205705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:35.213 [2024-11-20 14:50:42.242516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.213 [2024-11-20 14:50:42.242550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.213 [2024-11-20 14:50:42.242558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.213 [2024-11-20 14:50:42.242565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.213 [2024-11-20 14:50:42.242571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.213 [2024-11-20 14:50:42.243968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.213 [2024-11-20 14:50:42.244001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:35.213 [2024-11-20 14:50:42.244004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.473 [2024-11-20 14:50:42.300687] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:35.473 [2024-11-20 14:50:42.301123] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:35.473 [2024-11-20 14:50:42.301216] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:30:35.473 [2024-11-20 14:50:42.301282] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:36.040 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:36.040 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:30:36.040 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:36.040 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:36.040 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:36.040 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.040 14:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:36.040 [2024-11-20 14:50:43.080906] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.300 14:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:36.300 14:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:36.300 14:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:36.560 14:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:36.560 14:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:36.560 14:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:36.820 14:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5d0e9ad0-a857-4065-8a57-9fb5ca4c3c6c 00:30:36.820 14:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5d0e9ad0-a857-4065-8a57-9fb5ca4c3c6c lvol 20 00:30:37.079 14:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=901e443a-753b-4d06-bb3d-18c105f0f3ba 00:30:37.079 14:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:37.079 14:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 901e443a-753b-4d06-bb3d-18c105f0f3ba 00:30:37.338 14:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:37.597 [2024-11-20 14:50:44.400738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:37.597 14:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:37.597 
14:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4136658 00:30:37.597 14:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:37.597 14:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:38.536 14:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 901e443a-753b-4d06-bb3d-18c105f0f3ba MY_SNAPSHOT 00:30:38.795 14:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=43552a46-fb7b-4cd7-95fe-d570ae105166 00:30:38.795 14:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 901e443a-753b-4d06-bb3d-18c105f0f3ba 30 00:30:39.056 14:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 43552a46-fb7b-4cd7-95fe-d570ae105166 MY_CLONE 00:30:39.316 14:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6407d2a8-be74-4ffc-8689-01abb5049137 00:30:39.316 14:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6407d2a8-be74-4ffc-8689-01abb5049137 00:30:39.576 14:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4136658 00:30:49.558 Initializing NVMe Controllers 00:30:49.558 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:49.558 
Controller IO queue size 128, less than required. 00:30:49.558 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:49.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:49.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:49.558 Initialization complete. Launching workers. 00:30:49.558 ======================================================== 00:30:49.558 Latency(us) 00:30:49.558 Device Information : IOPS MiB/s Average min max 00:30:49.558 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15922.10 62.20 8040.37 1838.87 54225.22 00:30:49.558 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16343.30 63.84 7834.05 2165.23 54485.58 00:30:49.558 ======================================================== 00:30:49.558 Total : 32265.40 126.04 7935.86 1838.87 54485.58 00:30:49.558 00:30:49.558 14:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:49.558 14:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 901e443a-753b-4d06-bb3d-18c105f0f3ba 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5d0e9ad0-a857-4065-8a57-9fb5ca4c3c6c 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:49.558 rmmod nvme_tcp 00:30:49.558 rmmod nvme_fabrics 00:30:49.558 rmmod nvme_keyring 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 4136083 ']' 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 4136083 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 4136083 ']' 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 4136083 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 4136083 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4136083' 00:30:49.558 killing process with pid 4136083 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 4136083 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 4136083 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.558 14:50:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.558 14:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.939 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:50.939 00:30:50.939 real 0m21.062s 00:30:50.939 user 0m54.072s 00:30:50.939 sys 0m8.755s 00:30:50.939 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:50.939 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:50.939 ************************************ 00:30:50.939 END TEST nvmf_lvol 00:30:50.940 ************************************ 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:50.940 ************************************ 00:30:50.940 START TEST nvmf_lvs_grow 00:30:50.940 ************************************ 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:50.940 * Looking for test storage... 
00:30:50.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:50.940 14:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:50.940 14:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:50.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.940 --rc genhtml_branch_coverage=1 00:30:50.940 --rc genhtml_function_coverage=1 00:30:50.940 --rc genhtml_legend=1 00:30:50.940 --rc geninfo_all_blocks=1 00:30:50.940 --rc geninfo_unexecuted_blocks=1 00:30:50.940 00:30:50.940 ' 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:50.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.940 --rc genhtml_branch_coverage=1 00:30:50.940 --rc genhtml_function_coverage=1 00:30:50.940 --rc genhtml_legend=1 00:30:50.940 --rc geninfo_all_blocks=1 00:30:50.940 --rc geninfo_unexecuted_blocks=1 00:30:50.940 00:30:50.940 ' 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:50.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.940 --rc genhtml_branch_coverage=1 00:30:50.940 --rc genhtml_function_coverage=1 00:30:50.940 --rc genhtml_legend=1 00:30:50.940 --rc geninfo_all_blocks=1 00:30:50.940 --rc geninfo_unexecuted_blocks=1 00:30:50.940 00:30:50.940 ' 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:50.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.940 --rc genhtml_branch_coverage=1 00:30:50.940 --rc genhtml_function_coverage=1 00:30:50.940 --rc genhtml_legend=1 00:30:50.940 --rc geninfo_all_blocks=1 00:30:50.940 --rc 
geninfo_unexecuted_blocks=1 00:30:50.940 00:30:50.940 ' 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:50.940 14:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.940 14:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.940 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:50.941 14:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:50.941 14:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:56.218 
14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:56.218 14:51:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:56.218 14:51:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:56.218 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:56.218 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:56.218 Found net devices under 0000:31:00.0: cvl_0_0 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.218 14:51:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:56.218 Found net devices under 0000:31:00.1: cvl_0_1 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:56.218 
14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:56.218 14:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:56.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:56.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:30:56.219 00:30:56.219 --- 10.0.0.2 ping statistics --- 00:30:56.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.219 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:56.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:56.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:30:56.219 00:30:56.219 --- 10.0.0.1 ping statistics --- 00:30:56.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.219 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:56.219 14:51:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=4143432 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 4143432 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 4143432 ']' 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:56.219 [2024-11-20 14:51:03.084295] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:56.219 [2024-11-20 14:51:03.085288] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:30:56.219 [2024-11-20 14:51:03.085326] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.219 [2024-11-20 14:51:03.156057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.219 [2024-11-20 14:51:03.186166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:56.219 [2024-11-20 14:51:03.186194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:56.219 [2024-11-20 14:51:03.186200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:56.219 [2024-11-20 14:51:03.186205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:56.219 [2024-11-20 14:51:03.186209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:56.219 [2024-11-20 14:51:03.186638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.219 [2024-11-20 14:51:03.238029] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:56.219 [2024-11-20 14:51:03.238223] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
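The `nvmf_tcp_init` sequence traced above (flush addresses, create the namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, bring links up, open port 4420, ping both ways) can be sketched as one shell function. This is a dry-run sketch, not the test's own helper: the `run` wrapper only echoes each command, since really executing them needs root and the `cvl_0_0`/`cvl_0_1` interfaces present on this rig.

```shell
#!/usr/bin/env bash
# Dry-run wrapper: prints each command instead of executing it.
# Change to `run() { "$@"; }` to execute for real (requires root).
run() { echo "+ $*"; }

nvmf_tcp_init() {
  # Start from a clean slate on both interfaces.
  run ip -4 addr flush cvl_0_0
  run ip -4 addr flush cvl_0_1
  # Isolate the target-side NIC in its own network namespace.
  run ip netns add cvl_0_0_ns_spdk
  run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Initiator keeps 10.0.0.1; the target (inside the netns) gets 10.0.0.2.
  run ip addr add 10.0.0.1/24 dev cvl_0_1
  run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  run ip link set cvl_0_1 up
  run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  run ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Accept inbound NVMe/TCP traffic on the default port 4420.
  run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Verify connectivity in both directions before starting the target.
  run ping -c 1 10.0.0.2
  run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
}

nvmf_tcp_init
```

With the real wrapper, a failure of either ping aborts setup before `nvmf_tgt` is ever launched, which is why the log prints both ping statistics blocks before `nvmfappstart`.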
00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:56.219 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:56.478 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:56.478 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:56.478 [2024-11-20 14:51:03.423355] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:56.478 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:56.478 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:56.478 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:56.478 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:56.478 ************************************ 00:30:56.478 START TEST lvs_grow_clean 00:30:56.478 ************************************ 00:30:56.478 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:30:56.478 14:51:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:56.478 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:56.478 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:56.478 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:56.478 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:56.478 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:56.478 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:56.478 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:56.478 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:56.738 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:56.738 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:56.997 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=091afc5d-434e-47f0-b4e9-9d4037c5b837 00:30:56.998 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 091afc5d-434e-47f0-b4e9-9d4037c5b837 00:30:56.998 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:56.998 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:56.998 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:56.998 14:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 091afc5d-434e-47f0-b4e9-9d4037c5b837 lvol 150 00:30:57.257 14:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f4ff9d4d-4582-471a-bc26-74a900598f01 00:30:57.257 14:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:57.257 14:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:57.257 [2024-11-20 14:51:04.283026] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:57.257 [2024-11-20 14:51:04.283183] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:57.257 true 00:30:57.257 14:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 091afc5d-434e-47f0-b4e9-9d4037c5b837 00:30:57.257 14:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:57.516 14:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:57.516 14:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:57.776 14:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f4ff9d4d-4582-471a-bc26-74a900598f01 00:30:57.776 14:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:58.034 [2024-11-20 14:51:04.919597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.034 14:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:58.292 14:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:58.292 14:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4143839 00:30:58.292 14:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:58.292 14:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4143839 /var/tmp/bdevperf.sock 00:30:58.292 14:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 4143839 ']' 00:30:58.292 14:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:58.292 14:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:58.292 14:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:58.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:58.292 14:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:58.292 14:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:58.292 [2024-11-20 14:51:05.130426] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:30:58.292 [2024-11-20 14:51:05.130504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4143839 ] 00:30:58.293 [2024-11-20 14:51:05.214577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.293 [2024-11-20 14:51:05.267104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.229 14:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:59.229 14:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:59.229 14:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:59.229 Nvme0n1 00:30:59.229 14:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:59.488 [ 00:30:59.488 { 00:30:59.488 "name": "Nvme0n1", 00:30:59.488 "aliases": [ 00:30:59.488 "f4ff9d4d-4582-471a-bc26-74a900598f01" 00:30:59.488 ], 00:30:59.488 "product_name": "NVMe disk", 00:30:59.488 
"block_size": 4096, 00:30:59.488 "num_blocks": 38912, 00:30:59.488 "uuid": "f4ff9d4d-4582-471a-bc26-74a900598f01", 00:30:59.488 "numa_id": 0, 00:30:59.488 "assigned_rate_limits": { 00:30:59.488 "rw_ios_per_sec": 0, 00:30:59.488 "rw_mbytes_per_sec": 0, 00:30:59.488 "r_mbytes_per_sec": 0, 00:30:59.488 "w_mbytes_per_sec": 0 00:30:59.488 }, 00:30:59.488 "claimed": false, 00:30:59.488 "zoned": false, 00:30:59.488 "supported_io_types": { 00:30:59.488 "read": true, 00:30:59.488 "write": true, 00:30:59.488 "unmap": true, 00:30:59.488 "flush": true, 00:30:59.488 "reset": true, 00:30:59.488 "nvme_admin": true, 00:30:59.488 "nvme_io": true, 00:30:59.488 "nvme_io_md": false, 00:30:59.488 "write_zeroes": true, 00:30:59.488 "zcopy": false, 00:30:59.488 "get_zone_info": false, 00:30:59.488 "zone_management": false, 00:30:59.488 "zone_append": false, 00:30:59.488 "compare": true, 00:30:59.488 "compare_and_write": true, 00:30:59.488 "abort": true, 00:30:59.488 "seek_hole": false, 00:30:59.488 "seek_data": false, 00:30:59.488 "copy": true, 00:30:59.488 "nvme_iov_md": false 00:30:59.488 }, 00:30:59.488 "memory_domains": [ 00:30:59.488 { 00:30:59.488 "dma_device_id": "system", 00:30:59.488 "dma_device_type": 1 00:30:59.488 } 00:30:59.488 ], 00:30:59.488 "driver_specific": { 00:30:59.488 "nvme": [ 00:30:59.488 { 00:30:59.488 "trid": { 00:30:59.488 "trtype": "TCP", 00:30:59.488 "adrfam": "IPv4", 00:30:59.488 "traddr": "10.0.0.2", 00:30:59.488 "trsvcid": "4420", 00:30:59.488 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:59.488 }, 00:30:59.488 "ctrlr_data": { 00:30:59.488 "cntlid": 1, 00:30:59.488 "vendor_id": "0x8086", 00:30:59.488 "model_number": "SPDK bdev Controller", 00:30:59.488 "serial_number": "SPDK0", 00:30:59.488 "firmware_revision": "25.01", 00:30:59.488 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:59.488 "oacs": { 00:30:59.488 "security": 0, 00:30:59.488 "format": 0, 00:30:59.488 "firmware": 0, 00:30:59.488 "ns_manage": 0 00:30:59.488 }, 00:30:59.488 "multi_ctrlr": true, 
00:30:59.488 "ana_reporting": false 00:30:59.488 }, 00:30:59.488 "vs": { 00:30:59.488 "nvme_version": "1.3" 00:30:59.488 }, 00:30:59.488 "ns_data": { 00:30:59.488 "id": 1, 00:30:59.488 "can_share": true 00:30:59.488 } 00:30:59.488 } 00:30:59.488 ], 00:30:59.488 "mp_policy": "active_passive" 00:30:59.488 } 00:30:59.488 } 00:30:59.488 ] 00:30:59.488 14:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4144153 00:30:59.488 14:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:59.488 14:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:59.488 Running I/O for 10 seconds... 00:31:00.423 Latency(us) 00:31:00.423 [2024-11-20T13:51:07.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:00.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:00.423 Nvme0n1 : 1.00 17536.00 68.50 0.00 0.00 0.00 0.00 0.00 00:31:00.423 [2024-11-20T13:51:07.483Z] =================================================================================================================== 00:31:00.423 [2024-11-20T13:51:07.483Z] Total : 17536.00 68.50 0.00 0.00 0.00 0.00 0.00 00:31:00.423 00:31:01.359 14:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 091afc5d-434e-47f0-b4e9-9d4037c5b837 00:31:01.359 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:01.359 Nvme0n1 : 2.00 17594.50 68.73 0.00 0.00 0.00 0.00 0.00 00:31:01.359 [2024-11-20T13:51:08.419Z] 
=================================================================================================================== 00:31:01.359 [2024-11-20T13:51:08.419Z] Total : 17594.50 68.73 0.00 0.00 0.00 0.00 0.00 00:31:01.359 00:31:01.617 true 00:31:01.617 14:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 091afc5d-434e-47f0-b4e9-9d4037c5b837 00:31:01.617 14:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:01.617 14:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:01.617 14:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:01.617 14:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4144153 00:31:02.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:02.553 Nvme0n1 : 3.00 17677.67 69.05 0.00 0.00 0.00 0.00 0.00 00:31:02.553 [2024-11-20T13:51:09.613Z] =================================================================================================================== 00:31:02.553 [2024-11-20T13:51:09.613Z] Total : 17677.67 69.05 0.00 0.00 0.00 0.00 0.00 00:31:02.553 00:31:03.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:03.491 Nvme0n1 : 4.00 18513.25 72.32 0.00 0.00 0.00 0.00 0.00 00:31:03.491 [2024-11-20T13:51:10.551Z] =================================================================================================================== 00:31:03.491 [2024-11-20T13:51:10.551Z] Total : 18513.25 72.32 0.00 0.00 0.00 0.00 0.00 00:31:03.491 00:31:04.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:31:04.430 Nvme0n1 : 5.00 19851.60 77.55 0.00 0.00 0.00 0.00 0.00 00:31:04.430 [2024-11-20T13:51:11.490Z] =================================================================================================================== 00:31:04.430 [2024-11-20T13:51:11.490Z] Total : 19851.60 77.55 0.00 0.00 0.00 0.00 0.00 00:31:04.430 00:31:05.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:05.366 Nvme0n1 : 6.00 20746.83 81.04 0.00 0.00 0.00 0.00 0.00 00:31:05.366 [2024-11-20T13:51:12.426Z] =================================================================================================================== 00:31:05.366 [2024-11-20T13:51:12.426Z] Total : 20746.83 81.04 0.00 0.00 0.00 0.00 0.00 00:31:05.366 00:31:06.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:06.745 Nvme0n1 : 7.00 21393.71 83.57 0.00 0.00 0.00 0.00 0.00 00:31:06.745 [2024-11-20T13:51:13.805Z] =================================================================================================================== 00:31:06.745 [2024-11-20T13:51:13.805Z] Total : 21393.71 83.57 0.00 0.00 0.00 0.00 0.00 00:31:06.745 00:31:07.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:07.706 Nvme0n1 : 8.00 21887.50 85.50 0.00 0.00 0.00 0.00 0.00 00:31:07.706 [2024-11-20T13:51:14.766Z] =================================================================================================================== 00:31:07.706 [2024-11-20T13:51:14.766Z] Total : 21887.50 85.50 0.00 0.00 0.00 0.00 0.00 00:31:07.706 00:31:08.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:08.361 Nvme0n1 : 9.00 22263.67 86.97 0.00 0.00 0.00 0.00 0.00 00:31:08.361 [2024-11-20T13:51:15.421Z] =================================================================================================================== 00:31:08.361 [2024-11-20T13:51:15.421Z] Total : 22263.67 86.97 0.00 0.00 0.00 0.00 0.00 00:31:08.361 
00:31:09.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:09.741 Nvme0n1 : 10.00 22565.60 88.15 0.00 0.00 0.00 0.00 0.00 00:31:09.741 [2024-11-20T13:51:16.801Z] =================================================================================================================== 00:31:09.741 [2024-11-20T13:51:16.801Z] Total : 22565.60 88.15 0.00 0.00 0.00 0.00 0.00 00:31:09.741 00:31:09.741 00:31:09.741 Latency(us) 00:31:09.741 [2024-11-20T13:51:16.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:09.741 Nvme0n1 : 10.00 22567.35 88.15 0.00 0.00 5668.81 2293.76 13380.27 00:31:09.741 [2024-11-20T13:51:16.801Z] =================================================================================================================== 00:31:09.741 [2024-11-20T13:51:16.801Z] Total : 22567.35 88.15 0.00 0.00 5668.81 2293.76 13380.27 00:31:09.741 { 00:31:09.741 "results": [ 00:31:09.741 { 00:31:09.741 "job": "Nvme0n1", 00:31:09.741 "core_mask": "0x2", 00:31:09.741 "workload": "randwrite", 00:31:09.741 "status": "finished", 00:31:09.741 "queue_depth": 128, 00:31:09.741 "io_size": 4096, 00:31:09.741 "runtime": 10.004896, 00:31:09.741 "iops": 22567.35102493819, 00:31:09.741 "mibps": 88.15371494116481, 00:31:09.741 "io_failed": 0, 00:31:09.741 "io_timeout": 0, 00:31:09.741 "avg_latency_us": 5668.80871157094, 00:31:09.741 "min_latency_us": 2293.76, 00:31:09.741 "max_latency_us": 13380.266666666666 00:31:09.741 } 00:31:09.741 ], 00:31:09.741 "core_count": 1 00:31:09.741 } 00:31:09.741 14:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4143839 00:31:09.741 14:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 4143839 ']' 00:31:09.741 14:51:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 4143839 00:31:09.741 14:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:31:09.741 14:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:09.741 14:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4143839 00:31:09.741 14:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:09.741 14:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:09.741 14:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4143839' 00:31:09.741 killing process with pid 4143839 00:31:09.741 14:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 4143839 00:31:09.741 Received shutdown signal, test time was about 10.000000 seconds 00:31:09.741 00:31:09.741 Latency(us) 00:31:09.741 [2024-11-20T13:51:16.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.741 [2024-11-20T13:51:16.801Z] =================================================================================================================== 00:31:09.741 [2024-11-20T13:51:16.801Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:09.741 14:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 4143839 00:31:09.741 14:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:09.742 14:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:10.001 14:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:10.001 14:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 091afc5d-434e-47f0-b4e9-9d4037c5b837 00:31:10.261 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:10.261 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:10.261 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:10.261 [2024-11-20 14:51:17.215095] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:10.261 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 091afc5d-434e-47f0-b4e9-9d4037c5b837 00:31:10.261 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:31:10.261 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 091afc5d-434e-47f0-b4e9-9d4037c5b837 00:31:10.261 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:10.261 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:10.261 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:10.261 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:10.261 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:10.261 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:10.261 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:10.261 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:10.261 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 091afc5d-434e-47f0-b4e9-9d4037c5b837 00:31:10.521 request: 00:31:10.521 { 00:31:10.521 "uuid": "091afc5d-434e-47f0-b4e9-9d4037c5b837", 00:31:10.521 "method": 
"bdev_lvol_get_lvstores", 00:31:10.521 "req_id": 1 00:31:10.521 } 00:31:10.521 Got JSON-RPC error response 00:31:10.521 response: 00:31:10.521 { 00:31:10.521 "code": -19, 00:31:10.521 "message": "No such device" 00:31:10.521 } 00:31:10.521 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:31:10.521 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:10.521 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:10.521 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:10.521 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:10.521 aio_bdev 00:31:10.521 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f4ff9d4d-4582-471a-bc26-74a900598f01 00:31:10.521 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f4ff9d4d-4582-471a-bc26-74a900598f01 00:31:10.521 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:10.521 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:31:10.522 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:10.522 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:10.522 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:10.781 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f4ff9d4d-4582-471a-bc26-74a900598f01 -t 2000 00:31:11.041 [ 00:31:11.041 { 00:31:11.041 "name": "f4ff9d4d-4582-471a-bc26-74a900598f01", 00:31:11.041 "aliases": [ 00:31:11.041 "lvs/lvol" 00:31:11.041 ], 00:31:11.041 "product_name": "Logical Volume", 00:31:11.041 "block_size": 4096, 00:31:11.041 "num_blocks": 38912, 00:31:11.041 "uuid": "f4ff9d4d-4582-471a-bc26-74a900598f01", 00:31:11.041 "assigned_rate_limits": { 00:31:11.041 "rw_ios_per_sec": 0, 00:31:11.041 "rw_mbytes_per_sec": 0, 00:31:11.041 "r_mbytes_per_sec": 0, 00:31:11.041 "w_mbytes_per_sec": 0 00:31:11.041 }, 00:31:11.041 "claimed": false, 00:31:11.041 "zoned": false, 00:31:11.041 "supported_io_types": { 00:31:11.041 "read": true, 00:31:11.041 "write": true, 00:31:11.041 "unmap": true, 00:31:11.041 "flush": false, 00:31:11.041 "reset": true, 00:31:11.041 "nvme_admin": false, 00:31:11.041 "nvme_io": false, 00:31:11.041 "nvme_io_md": false, 00:31:11.041 "write_zeroes": true, 00:31:11.041 "zcopy": false, 00:31:11.041 "get_zone_info": false, 00:31:11.041 "zone_management": false, 00:31:11.041 "zone_append": false, 00:31:11.041 "compare": false, 00:31:11.041 "compare_and_write": false, 00:31:11.041 "abort": false, 00:31:11.041 "seek_hole": true, 00:31:11.041 "seek_data": true, 00:31:11.041 "copy": false, 00:31:11.041 "nvme_iov_md": false 00:31:11.041 }, 00:31:11.041 "driver_specific": { 00:31:11.041 "lvol": { 00:31:11.041 "lvol_store_uuid": "091afc5d-434e-47f0-b4e9-9d4037c5b837", 00:31:11.041 "base_bdev": "aio_bdev", 00:31:11.041 
"thin_provision": false, 00:31:11.041 "num_allocated_clusters": 38, 00:31:11.041 "snapshot": false, 00:31:11.041 "clone": false, 00:31:11.041 "esnap_clone": false 00:31:11.041 } 00:31:11.041 } 00:31:11.041 } 00:31:11.041 ] 00:31:11.041 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:31:11.041 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:11.041 14:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 091afc5d-434e-47f0-b4e9-9d4037c5b837 00:31:11.041 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:11.041 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 091afc5d-434e-47f0-b4e9-9d4037c5b837 00:31:11.041 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:11.302 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:11.302 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f4ff9d4d-4582-471a-bc26-74a900598f01 00:31:11.302 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 091afc5d-434e-47f0-b4e9-9d4037c5b837 
00:31:11.562 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:11.823 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:11.823 00:31:11.823 real 0m15.219s 00:31:11.823 user 0m14.835s 00:31:11.823 sys 0m1.253s 00:31:11.823 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:11.823 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:11.823 ************************************ 00:31:11.823 END TEST lvs_grow_clean 00:31:11.823 ************************************ 00:31:11.823 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:11.823 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:11.823 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:11.823 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:11.823 ************************************ 00:31:11.823 START TEST lvs_grow_dirty 00:31:11.823 ************************************ 00:31:11.823 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:31:11.823 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:11.823 14:51:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:11.823 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:11.823 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:11.823 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:11.823 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:11.823 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:11.823 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:11.823 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:12.083 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:12.083 14:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:12.083 14:51:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8c1a32ff-3dc8-4b64-9581-619b67ffb51f 00:31:12.084 14:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:12.084 14:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c1a32ff-3dc8-4b64-9581-619b67ffb51f 00:31:12.342 14:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:12.342 14:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:12.342 14:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8c1a32ff-3dc8-4b64-9581-619b67ffb51f lvol 150 00:31:12.342 14:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=cd42b476-2fdd-4924-999f-cb438a3d8048 00:31:12.342 14:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:12.342 14:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:12.602 [2024-11-20 14:51:19.527028] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:12.602 [2024-11-20 
14:51:19.527173] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:12.602 true 00:31:12.602 14:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c1a32ff-3dc8-4b64-9581-619b67ffb51f 00:31:12.602 14:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:12.862 14:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:12.862 14:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:12.862 14:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cd42b476-2fdd-4924-999f-cb438a3d8048 00:31:13.121 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:13.121 [2024-11-20 14:51:20.143577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:13.121 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:13.381 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4147650 00:31:13.381 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:13.381 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4147650 /var/tmp/bdevperf.sock 00:31:13.381 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4147650 ']' 00:31:13.381 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:13.381 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:13.381 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:13.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:13.381 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:13.381 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:13.381 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:13.381 [2024-11-20 14:51:20.341435] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:31:13.381 [2024-11-20 14:51:20.341479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4147650 ] 00:31:13.381 [2024-11-20 14:51:20.397096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.381 [2024-11-20 14:51:20.426895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.641 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:13.641 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:13.641 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:13.900 Nvme0n1 00:31:13.901 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:13.901 [ 00:31:13.901 { 00:31:13.901 "name": "Nvme0n1", 00:31:13.901 "aliases": [ 00:31:13.901 "cd42b476-2fdd-4924-999f-cb438a3d8048" 00:31:13.901 ], 00:31:13.901 "product_name": "NVMe disk", 00:31:13.901 "block_size": 4096, 00:31:13.901 "num_blocks": 38912, 00:31:13.901 "uuid": "cd42b476-2fdd-4924-999f-cb438a3d8048", 00:31:13.901 "numa_id": 0, 00:31:13.901 "assigned_rate_limits": { 00:31:13.901 "rw_ios_per_sec": 0, 00:31:13.901 "rw_mbytes_per_sec": 0, 00:31:13.901 "r_mbytes_per_sec": 0, 00:31:13.901 "w_mbytes_per_sec": 0 00:31:13.901 }, 00:31:13.901 "claimed": false, 00:31:13.901 "zoned": false, 
00:31:13.901 "supported_io_types": { 00:31:13.901 "read": true, 00:31:13.901 "write": true, 00:31:13.901 "unmap": true, 00:31:13.901 "flush": true, 00:31:13.901 "reset": true, 00:31:13.901 "nvme_admin": true, 00:31:13.901 "nvme_io": true, 00:31:13.901 "nvme_io_md": false, 00:31:13.901 "write_zeroes": true, 00:31:13.901 "zcopy": false, 00:31:13.901 "get_zone_info": false, 00:31:13.901 "zone_management": false, 00:31:13.901 "zone_append": false, 00:31:13.901 "compare": true, 00:31:13.901 "compare_and_write": true, 00:31:13.901 "abort": true, 00:31:13.901 "seek_hole": false, 00:31:13.901 "seek_data": false, 00:31:13.901 "copy": true, 00:31:13.901 "nvme_iov_md": false 00:31:13.901 }, 00:31:13.901 "memory_domains": [ 00:31:13.901 { 00:31:13.901 "dma_device_id": "system", 00:31:13.901 "dma_device_type": 1 00:31:13.901 } 00:31:13.901 ], 00:31:13.901 "driver_specific": { 00:31:13.901 "nvme": [ 00:31:13.901 { 00:31:13.901 "trid": { 00:31:13.901 "trtype": "TCP", 00:31:13.901 "adrfam": "IPv4", 00:31:13.901 "traddr": "10.0.0.2", 00:31:13.901 "trsvcid": "4420", 00:31:13.901 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:13.901 }, 00:31:13.901 "ctrlr_data": { 00:31:13.901 "cntlid": 1, 00:31:13.901 "vendor_id": "0x8086", 00:31:13.901 "model_number": "SPDK bdev Controller", 00:31:13.901 "serial_number": "SPDK0", 00:31:13.901 "firmware_revision": "25.01", 00:31:13.901 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:13.901 "oacs": { 00:31:13.901 "security": 0, 00:31:13.901 "format": 0, 00:31:13.901 "firmware": 0, 00:31:13.901 "ns_manage": 0 00:31:13.901 }, 00:31:13.901 "multi_ctrlr": true, 00:31:13.901 "ana_reporting": false 00:31:13.901 }, 00:31:13.901 "vs": { 00:31:13.901 "nvme_version": "1.3" 00:31:13.901 }, 00:31:13.901 "ns_data": { 00:31:13.901 "id": 1, 00:31:13.901 "can_share": true 00:31:13.901 } 00:31:13.901 } 00:31:13.901 ], 00:31:13.901 "mp_policy": "active_passive" 00:31:13.901 } 00:31:13.901 } 00:31:13.901 ] 00:31:13.901 14:51:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4147901 00:31:13.901 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:13.901 14:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:13.901 Running I/O for 10 seconds... 00:31:15.284 Latency(us) 00:31:15.284 [2024-11-20T13:51:22.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:15.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:15.284 Nvme0n1 : 1.00 24840.00 97.03 0.00 0.00 0.00 0.00 0.00 00:31:15.284 [2024-11-20T13:51:22.344Z] =================================================================================================================== 00:31:15.284 [2024-11-20T13:51:22.344Z] Total : 24840.00 97.03 0.00 0.00 0.00 0.00 0.00 00:31:15.284 00:31:15.852 14:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8c1a32ff-3dc8-4b64-9581-619b67ffb51f 00:31:16.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:16.112 Nvme0n1 : 2.00 24993.00 97.63 0.00 0.00 0.00 0.00 0.00 00:31:16.112 [2024-11-20T13:51:23.172Z] =================================================================================================================== 00:31:16.112 [2024-11-20T13:51:23.172Z] Total : 24993.00 97.63 0.00 0.00 0.00 0.00 0.00 00:31:16.112 00:31:16.112 true 00:31:16.112 14:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:16.112 14:51:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c1a32ff-3dc8-4b64-9581-619b67ffb51f 00:31:16.371 14:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:16.371 14:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:16.371 14:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4147901 00:31:16.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:16.941 Nvme0n1 : 3.00 25045.33 97.83 0.00 0.00 0.00 0.00 0.00 00:31:16.941 [2024-11-20T13:51:24.001Z] =================================================================================================================== 00:31:16.941 [2024-11-20T13:51:24.001Z] Total : 25045.33 97.83 0.00 0.00 0.00 0.00 0.00 00:31:16.941 00:31:18.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:18.320 Nvme0n1 : 4.00 25102.25 98.06 0.00 0.00 0.00 0.00 0.00 00:31:18.320 [2024-11-20T13:51:25.380Z] =================================================================================================================== 00:31:18.320 [2024-11-20T13:51:25.380Z] Total : 25102.25 98.06 0.00 0.00 0.00 0.00 0.00 00:31:18.320 00:31:19.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:19.261 Nvme0n1 : 5.00 25124.00 98.14 0.00 0.00 0.00 0.00 0.00 00:31:19.261 [2024-11-20T13:51:26.321Z] =================================================================================================================== 00:31:19.261 [2024-11-20T13:51:26.321Z] Total : 25124.00 98.14 0.00 0.00 0.00 0.00 0.00 00:31:19.261 00:31:20.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:31:20.206 Nvme0n1 : 6.00 25160.17 98.28 0.00 0.00 0.00 0.00 0.00 00:31:20.206 [2024-11-20T13:51:27.266Z] =================================================================================================================== 00:31:20.206 [2024-11-20T13:51:27.266Z] Total : 25160.17 98.28 0.00 0.00 0.00 0.00 0.00 00:31:20.206 00:31:21.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:21.147 Nvme0n1 : 7.00 25172.29 98.33 0.00 0.00 0.00 0.00 0.00 00:31:21.147 [2024-11-20T13:51:28.207Z] =================================================================================================================== 00:31:21.147 [2024-11-20T13:51:28.207Z] Total : 25172.29 98.33 0.00 0.00 0.00 0.00 0.00 00:31:21.147 00:31:22.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:22.084 Nvme0n1 : 8.00 25185.25 98.38 0.00 0.00 0.00 0.00 0.00 00:31:22.084 [2024-11-20T13:51:29.144Z] =================================================================================================================== 00:31:22.084 [2024-11-20T13:51:29.144Z] Total : 25185.25 98.38 0.00 0.00 0.00 0.00 0.00 00:31:22.084 00:31:23.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:23.029 Nvme0n1 : 9.00 25195.11 98.42 0.00 0.00 0.00 0.00 0.00 00:31:23.029 [2024-11-20T13:51:30.089Z] =================================================================================================================== 00:31:23.029 [2024-11-20T13:51:30.089Z] Total : 25195.11 98.42 0.00 0.00 0.00 0.00 0.00 00:31:23.029 00:31:23.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:23.966 Nvme0n1 : 10.00 25202.90 98.45 0.00 0.00 0.00 0.00 0.00 00:31:23.966 [2024-11-20T13:51:31.026Z] =================================================================================================================== 00:31:23.966 [2024-11-20T13:51:31.026Z] Total : 25202.90 98.45 0.00 0.00 0.00 0.00 0.00 00:31:23.966 
00:31:23.966 00:31:23.966 Latency(us) 00:31:23.966 [2024-11-20T13:51:31.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:23.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:23.966 Nvme0n1 : 10.00 25204.36 98.45 0.00 0.00 5075.62 1815.89 10267.31 00:31:23.966 [2024-11-20T13:51:31.026Z] =================================================================================================================== 00:31:23.966 [2024-11-20T13:51:31.026Z] Total : 25204.36 98.45 0.00 0.00 5075.62 1815.89 10267.31 00:31:23.966 { 00:31:23.966 "results": [ 00:31:23.966 { 00:31:23.966 "job": "Nvme0n1", 00:31:23.966 "core_mask": "0x2", 00:31:23.966 "workload": "randwrite", 00:31:23.966 "status": "finished", 00:31:23.966 "queue_depth": 128, 00:31:23.966 "io_size": 4096, 00:31:23.966 "runtime": 10.004501, 00:31:23.966 "iops": 25204.355519580637, 00:31:23.966 "mibps": 98.45451374836186, 00:31:23.966 "io_failed": 0, 00:31:23.966 "io_timeout": 0, 00:31:23.966 "avg_latency_us": 5075.623593872072, 00:31:23.966 "min_latency_us": 1815.8933333333334, 00:31:23.966 "max_latency_us": 10267.306666666667 00:31:23.966 } 00:31:23.966 ], 00:31:23.966 "core_count": 1 00:31:23.966 } 00:31:23.966 14:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4147650 00:31:23.966 14:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 4147650 ']' 00:31:23.966 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 4147650 00:31:23.966 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:31:23.966 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:23.966 14:51:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4147650 00:31:24.226 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:24.226 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:24.226 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4147650' 00:31:24.226 killing process with pid 4147650 00:31:24.226 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 4147650 00:31:24.226 Received shutdown signal, test time was about 10.000000 seconds 00:31:24.226 00:31:24.226 Latency(us) 00:31:24.226 [2024-11-20T13:51:31.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:24.226 [2024-11-20T13:51:31.286Z] =================================================================================================================== 00:31:24.226 [2024-11-20T13:51:31.286Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:24.226 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 4147650 00:31:24.226 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:24.486 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:24.486 14:51:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c1a32ff-3dc8-4b64-9581-619b67ffb51f 00:31:24.486 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:24.746 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:24.746 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:24.746 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4143432 00:31:24.746 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4143432 00:31:24.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4143432 Killed "${NVMF_APP[@]}" "$@" 00:31:24.746 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:24.746 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:24.746 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:24.746 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:24.746 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:24.746 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=4150213 00:31:24.746 14:51:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 4150213 00:31:24.746 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4150213 ']' 00:31:24.746 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.746 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:24.746 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.746 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:24.747 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:24.747 14:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:24.747 [2024-11-20 14:51:31.721255] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:24.747 [2024-11-20 14:51:31.722240] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:31:24.747 [2024-11-20 14:51:31.722287] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:24.747 [2024-11-20 14:51:31.795181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.006 [2024-11-20 14:51:31.824741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:25.006 [2024-11-20 14:51:31.824769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:25.006 [2024-11-20 14:51:31.824776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:25.006 [2024-11-20 14:51:31.824780] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:25.006 [2024-11-20 14:51:31.824784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:25.006 [2024-11-20 14:51:31.825277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.006 [2024-11-20 14:51:31.876597] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:25.006 [2024-11-20 14:51:31.876794] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:25.575 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:25.575 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:25.575 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:25.575 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:25.575 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:25.575 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:25.575 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:25.835 [2024-11-20 14:51:32.663842] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:25.835 [2024-11-20 14:51:32.663918] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:25.835 [2024-11-20 14:51:32.663941] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:25.835 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:25.835 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev cd42b476-2fdd-4924-999f-cb438a3d8048 00:31:25.835 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=cd42b476-2fdd-4924-999f-cb438a3d8048 00:31:25.835 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:25.835 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:25.835 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:25.835 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:25.835 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:25.835 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cd42b476-2fdd-4924-999f-cb438a3d8048 -t 2000 00:31:26.095 [ 00:31:26.095 { 00:31:26.095 "name": "cd42b476-2fdd-4924-999f-cb438a3d8048", 00:31:26.095 "aliases": [ 00:31:26.095 "lvs/lvol" 00:31:26.095 ], 00:31:26.095 "product_name": "Logical Volume", 00:31:26.095 "block_size": 4096, 00:31:26.095 "num_blocks": 38912, 00:31:26.095 "uuid": "cd42b476-2fdd-4924-999f-cb438a3d8048", 00:31:26.095 "assigned_rate_limits": { 00:31:26.095 "rw_ios_per_sec": 0, 00:31:26.095 "rw_mbytes_per_sec": 0, 00:31:26.095 "r_mbytes_per_sec": 0, 00:31:26.095 "w_mbytes_per_sec": 0 00:31:26.095 }, 00:31:26.095 "claimed": false, 00:31:26.095 "zoned": false, 00:31:26.095 "supported_io_types": { 00:31:26.095 "read": true, 00:31:26.095 "write": true, 00:31:26.095 "unmap": true, 00:31:26.095 "flush": false, 00:31:26.095 "reset": true, 00:31:26.095 "nvme_admin": false, 00:31:26.095 "nvme_io": false, 00:31:26.095 "nvme_io_md": false, 00:31:26.095 "write_zeroes": true, 
00:31:26.095 "zcopy": false, 00:31:26.095 "get_zone_info": false, 00:31:26.095 "zone_management": false, 00:31:26.095 "zone_append": false, 00:31:26.095 "compare": false, 00:31:26.095 "compare_and_write": false, 00:31:26.095 "abort": false, 00:31:26.095 "seek_hole": true, 00:31:26.095 "seek_data": true, 00:31:26.095 "copy": false, 00:31:26.095 "nvme_iov_md": false 00:31:26.095 }, 00:31:26.095 "driver_specific": { 00:31:26.095 "lvol": { 00:31:26.095 "lvol_store_uuid": "8c1a32ff-3dc8-4b64-9581-619b67ffb51f", 00:31:26.095 "base_bdev": "aio_bdev", 00:31:26.095 "thin_provision": false, 00:31:26.095 "num_allocated_clusters": 38, 00:31:26.095 "snapshot": false, 00:31:26.095 "clone": false, 00:31:26.095 "esnap_clone": false 00:31:26.095 } 00:31:26.095 } 00:31:26.095 } 00:31:26.095 ] 00:31:26.095 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:26.095 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:26.095 14:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c1a32ff-3dc8-4b64-9581-619b67ffb51f 00:31:26.095 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:26.095 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c1a32ff-3dc8-4b64-9581-619b67ffb51f 00:31:26.095 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:26.355 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:26.355 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:26.614 [2024-11-20 14:51:33.437755] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:26.614 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c1a32ff-3dc8-4b64-9581-619b67ffb51f 00:31:26.614 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:31:26.614 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c1a32ff-3dc8-4b64-9581-619b67ffb51f 00:31:26.614 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:26.614 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:26.614 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:26.614 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:26.614 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:26.614 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:26.614 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:26.614 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:26.614 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c1a32ff-3dc8-4b64-9581-619b67ffb51f 00:31:26.614 request: 00:31:26.614 { 00:31:26.614 "uuid": "8c1a32ff-3dc8-4b64-9581-619b67ffb51f", 00:31:26.614 "method": "bdev_lvol_get_lvstores", 00:31:26.614 "req_id": 1 00:31:26.614 } 00:31:26.614 Got JSON-RPC error response 00:31:26.614 response: 00:31:26.614 { 00:31:26.614 "code": -19, 00:31:26.614 "message": "No such device" 00:31:26.614 } 00:31:26.614 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:31:26.614 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:26.614 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:26.614 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:26.614 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:26.874 aio_bdev 00:31:26.874 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cd42b476-2fdd-4924-999f-cb438a3d8048 00:31:26.874 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=cd42b476-2fdd-4924-999f-cb438a3d8048 00:31:26.874 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:26.874 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:26.874 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:26.874 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:26.874 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:26.874 14:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cd42b476-2fdd-4924-999f-cb438a3d8048 -t 2000 00:31:27.134 [ 00:31:27.134 { 00:31:27.134 "name": "cd42b476-2fdd-4924-999f-cb438a3d8048", 00:31:27.134 "aliases": [ 00:31:27.134 "lvs/lvol" 00:31:27.134 ], 00:31:27.134 "product_name": "Logical Volume", 00:31:27.134 "block_size": 4096, 00:31:27.134 "num_blocks": 38912, 00:31:27.134 "uuid": "cd42b476-2fdd-4924-999f-cb438a3d8048", 00:31:27.134 "assigned_rate_limits": { 00:31:27.134 "rw_ios_per_sec": 0, 00:31:27.134 "rw_mbytes_per_sec": 0, 00:31:27.134 
"r_mbytes_per_sec": 0, 00:31:27.134 "w_mbytes_per_sec": 0 00:31:27.134 }, 00:31:27.134 "claimed": false, 00:31:27.134 "zoned": false, 00:31:27.134 "supported_io_types": { 00:31:27.134 "read": true, 00:31:27.134 "write": true, 00:31:27.134 "unmap": true, 00:31:27.134 "flush": false, 00:31:27.134 "reset": true, 00:31:27.134 "nvme_admin": false, 00:31:27.134 "nvme_io": false, 00:31:27.134 "nvme_io_md": false, 00:31:27.134 "write_zeroes": true, 00:31:27.134 "zcopy": false, 00:31:27.134 "get_zone_info": false, 00:31:27.134 "zone_management": false, 00:31:27.134 "zone_append": false, 00:31:27.134 "compare": false, 00:31:27.134 "compare_and_write": false, 00:31:27.134 "abort": false, 00:31:27.134 "seek_hole": true, 00:31:27.134 "seek_data": true, 00:31:27.134 "copy": false, 00:31:27.134 "nvme_iov_md": false 00:31:27.134 }, 00:31:27.134 "driver_specific": { 00:31:27.134 "lvol": { 00:31:27.134 "lvol_store_uuid": "8c1a32ff-3dc8-4b64-9581-619b67ffb51f", 00:31:27.134 "base_bdev": "aio_bdev", 00:31:27.134 "thin_provision": false, 00:31:27.134 "num_allocated_clusters": 38, 00:31:27.134 "snapshot": false, 00:31:27.134 "clone": false, 00:31:27.134 "esnap_clone": false 00:31:27.134 } 00:31:27.134 } 00:31:27.134 } 00:31:27.134 ] 00:31:27.134 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:27.134 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:27.134 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c1a32ff-3dc8-4b64-9581-619b67ffb51f 00:31:27.395 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:27.395 14:51:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c1a32ff-3dc8-4b64-9581-619b67ffb51f 00:31:27.395 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:27.395 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:27.395 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cd42b476-2fdd-4924-999f-cb438a3d8048 00:31:27.654 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8c1a32ff-3dc8-4b64-9581-619b67ffb51f 00:31:27.654 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:27.913 00:31:27.913 real 0m16.152s 00:31:27.913 user 0m33.868s 00:31:27.913 sys 0m2.709s 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:27.913 ************************************ 00:31:27.913 END TEST lvs_grow_dirty 00:31:27.913 ************************************ 
00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:27.913 nvmf_trace.0 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:27.913 14:51:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:27.913 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:27.913 rmmod nvme_tcp 00:31:27.913 rmmod nvme_fabrics 00:31:28.171 rmmod nvme_keyring 00:31:28.171 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:28.172 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:28.172 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:28.172 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 4150213 ']' 00:31:28.172 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 4150213 00:31:28.172 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 4150213 ']' 00:31:28.172 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 4150213 00:31:28.172 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:31:28.172 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:28.172 14:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4150213 00:31:28.172 14:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:28.172 14:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:28.172 
14:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4150213' 00:31:28.172 killing process with pid 4150213 00:31:28.172 14:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 4150213 00:31:28.172 14:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 4150213 00:31:28.172 14:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:28.172 14:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:28.172 14:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:28.172 14:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:28.172 14:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:31:28.172 14:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:28.172 14:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:31:28.172 14:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:28.172 14:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:28.172 14:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.172 14:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:28.172 14:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.710 
14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:30.710 00:31:30.710 real 0m39.573s 00:31:30.710 user 0m50.583s 00:31:30.710 sys 0m8.322s 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:30.710 ************************************ 00:31:30.710 END TEST nvmf_lvs_grow 00:31:30.710 ************************************ 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:30.710 ************************************ 00:31:30.710 START TEST nvmf_bdev_io_wait 00:31:30.710 ************************************ 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:30.710 * Looking for test storage... 
00:31:30.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:30.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.710 --rc genhtml_branch_coverage=1 00:31:30.710 --rc genhtml_function_coverage=1 00:31:30.710 --rc genhtml_legend=1 00:31:30.710 --rc geninfo_all_blocks=1 00:31:30.710 --rc geninfo_unexecuted_blocks=1 00:31:30.710 00:31:30.710 ' 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:30.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.710 --rc genhtml_branch_coverage=1 00:31:30.710 --rc genhtml_function_coverage=1 00:31:30.710 --rc genhtml_legend=1 00:31:30.710 --rc geninfo_all_blocks=1 00:31:30.710 --rc geninfo_unexecuted_blocks=1 00:31:30.710 00:31:30.710 ' 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:30.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.710 --rc genhtml_branch_coverage=1 00:31:30.710 --rc genhtml_function_coverage=1 00:31:30.710 --rc genhtml_legend=1 00:31:30.710 --rc geninfo_all_blocks=1 00:31:30.710 --rc geninfo_unexecuted_blocks=1 00:31:30.710 00:31:30.710 ' 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:30.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.710 --rc genhtml_branch_coverage=1 00:31:30.710 --rc genhtml_function_coverage=1 
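The trace above walks through SPDK's `cmp_versions` helper in `scripts/common.sh`, which splits each version string on `.`, `-`, and `:` and compares the components pairwise (here evaluating `lt 1.15 2` against the lcov version). A simplified, hedged reconstruction of that logic — not the exact SPDK implementation — looks like this:

```shell
# Simplified sketch of the version comparison traced in the log: split each
# version string on ".-:" into arrays, then compare component by component,
# treating missing components as 0. Returns 0 (true) when $1 < $2.
lt() {
  local -a ver1 ver2
  local v len
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1  # versions are equal, so not strictly less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

The first differing component decides the result, which is why `1.15 < 2` holds even though `15 > 2` numerically: the major components `1` and `2` are compared first.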
00:31:30.710 --rc genhtml_legend=1 00:31:30.710 --rc geninfo_all_blocks=1 00:31:30.710 --rc geninfo_unexecuted_blocks=1 00:31:30.710 00:31:30.710 ' 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:30.710 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:30.711 14:51:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.711 14:51:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:30.711 14:51:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:30.711 14:51:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:30.711 14:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:35.992 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:35.992 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:35.992 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:35.992 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:35.992 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:35.992 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:35.993 14:51:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:35.993 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:35.993 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:35.993 Found net devices under 0000:31:00.0: cvl_0_0 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:35.993 Found net devices under 0000:31:00.1: cvl_0_1 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:31:35.993 14:51:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:35.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:35.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:31:35.993 00:31:35.993 --- 10.0.0.2 ping statistics --- 00:31:35.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.993 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:35.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:35.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:31:35.993 00:31:35.993 --- 10.0.0.1 ping statistics --- 00:31:35.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.993 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:31:35.993 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:35.994 14:51:42 
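The `nvmf_tcp_init` sequence traced above (through the two ping checks) amounts to the following standalone sketch. Interface names (`cvl_0_0`, `cvl_0_1`), addresses, the namespace name, and the iptables rule are taken directly from this log; the script assumes both NIC ports already exist and must run as root, so it is a reference reconstruction rather than a drop-in replacement for `nvmf/common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the target-namespace plumbing from nvmf_tcp_init: the target-side
# port is moved into a private network namespace so the SPDK target and the
# initiator talk over a real TCP path on the same host.
set -e
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                      # clear stale addresses
ip -4 addr flush cvl_0_1
ip netns add "$NS"                            # private namespace for the target
ip link set cvl_0_0 netns "$NS"               # target-side port goes inside
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# allow NVMe/TCP traffic to the target port on the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                            # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1        # and the reverse path
```

The comment tag on the iptables rule is what lets the teardown path (`iptr` in the log, via `iptables-save | grep -v SPDK_NVMF | iptables-restore`) strip only the rules this test added.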
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=4155323 00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 4155323 00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 4155323 ']' 00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:35.994 14:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:35.994 [2024-11-20 14:51:42.742690] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:35.994 [2024-11-20 14:51:42.743668] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:31:35.994 [2024-11-20 14:51:42.743707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.994 [2024-11-20 14:51:42.828422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:35.994 [2024-11-20 14:51:42.866566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.994 [2024-11-20 14:51:42.866597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.994 [2024-11-20 14:51:42.866605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.994 [2024-11-20 14:51:42.866612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.994 [2024-11-20 14:51:42.866618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:35.994 [2024-11-20 14:51:42.868129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.994 [2024-11-20 14:51:42.868293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:35.994 [2024-11-20 14:51:42.868366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.994 [2024-11-20 14:51:42.868367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:35.994 [2024-11-20 14:51:42.868752] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:36.564 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:36.564 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:31:36.564 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:36.564 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:36.564 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:36.564 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.564 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:36.564 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.564 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:36.564 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.564 14:51:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:36.564 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.564 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:36.564 [2024-11-20 14:51:43.608147] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:36.564 [2024-11-20 14:51:43.609073] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:36.564 [2024-11-20 14:51:43.609092] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:36.564 [2024-11-20 14:51:43.609320] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
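The trace above boots nvmf_tgt with --wait-for-rpc, sets bdev options over the RPC socket, and only then issues framework_start_init (at which point the poll-group threads switch to interrupt mode). A minimal dry-run sketch of that startup ordering; the rpc() helper here is a hypothetical stand-in that echoes instead of invoking SPDK's scripts/rpc.py, and the socket path mirrors the log's /var/tmp/spdk.sock:

```shell
#!/bin/sh
# Dry-run sketch of the nvmf_tgt startup sequence seen in the log.
# rpc() is a hypothetical stand-in: it prints the command instead of
# calling SPDK's scripts/rpc.py against a live target.
rpc() { echo "rpc.py -s /var/tmp/spdk.sock $*"; }

# 1. Start the target paused; --wait-for-rpc makes it listen on the
#    RPC socket without completing framework initialization.
echo "nvmf_tgt -m 0xF --interrupt-mode --wait-for-rpc &"

# 2. Options that must be set before init (as in bdev_io_wait.sh@18).
rpc bdev_set_options -p 5 -c 1

# 3. Resume initialization; this is where the log shows the
#    nvmf_tgt_poll_group threads entering interrupt mode.
rpc framework_start_init
```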
00:31:36.564 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.564 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:36.564 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.564 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:36.564 [2024-11-20 14:51:43.616987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:36.826 Malloc0 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.826 14:51:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:36.826 [2024-11-20 14:51:43.669144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4155411 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4155412 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:36.826 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4155414 00:31:36.826 14:51:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:36.827 { 00:31:36.827 "params": { 00:31:36.827 "name": "Nvme$subsystem", 00:31:36.827 "trtype": "$TEST_TRANSPORT", 00:31:36.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.827 "adrfam": "ipv4", 00:31:36.827 "trsvcid": "$NVMF_PORT", 00:31:36.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.827 "hdgst": ${hdgst:-false}, 00:31:36.827 "ddgst": ${ddgst:-false} 00:31:36.827 }, 00:31:36.827 "method": "bdev_nvme_attach_controller" 00:31:36.827 } 00:31:36.827 EOF 00:31:36.827 )") 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4155416 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:36.827 14:51:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:36.827 { 00:31:36.827 "params": { 00:31:36.827 "name": "Nvme$subsystem", 00:31:36.827 "trtype": "$TEST_TRANSPORT", 00:31:36.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.827 "adrfam": "ipv4", 00:31:36.827 "trsvcid": "$NVMF_PORT", 00:31:36.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.827 "hdgst": ${hdgst:-false}, 00:31:36.827 "ddgst": ${ddgst:-false} 00:31:36.827 }, 00:31:36.827 "method": "bdev_nvme_attach_controller" 00:31:36.827 } 00:31:36.827 EOF 00:31:36.827 )") 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:36.827 14:51:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:36.827 { 00:31:36.827 "params": { 00:31:36.827 "name": "Nvme$subsystem", 00:31:36.827 "trtype": "$TEST_TRANSPORT", 00:31:36.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.827 "adrfam": "ipv4", 00:31:36.827 "trsvcid": "$NVMF_PORT", 00:31:36.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.827 "hdgst": ${hdgst:-false}, 00:31:36.827 "ddgst": ${ddgst:-false} 00:31:36.827 }, 00:31:36.827 "method": "bdev_nvme_attach_controller" 00:31:36.827 } 00:31:36.827 EOF 00:31:36.827 )") 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:36.827 { 00:31:36.827 "params": { 00:31:36.827 "name": "Nvme$subsystem", 00:31:36.827 "trtype": "$TEST_TRANSPORT", 00:31:36.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.827 "adrfam": "ipv4", 00:31:36.827 "trsvcid": "$NVMF_PORT", 00:31:36.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.827 "hdgst": ${hdgst:-false}, 00:31:36.827 "ddgst": ${ddgst:-false} 00:31:36.827 }, 00:31:36.827 "method": "bdev_nvme_attach_controller" 00:31:36.827 } 00:31:36.827 EOF 00:31:36.827 
)") 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4155411 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:36.827 "params": { 00:31:36.827 "name": "Nvme1", 00:31:36.827 "trtype": "tcp", 00:31:36.827 "traddr": "10.0.0.2", 00:31:36.827 "adrfam": "ipv4", 00:31:36.827 "trsvcid": "4420", 00:31:36.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:36.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:36.827 "hdgst": false, 00:31:36.827 "ddgst": false 00:31:36.827 }, 00:31:36.827 "method": "bdev_nvme_attach_controller" 00:31:36.827 }' 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:36.827 "params": { 00:31:36.827 "name": "Nvme1", 00:31:36.827 "trtype": "tcp", 00:31:36.827 "traddr": "10.0.0.2", 00:31:36.827 "adrfam": "ipv4", 00:31:36.827 
"trsvcid": "4420", 00:31:36.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:36.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:36.827 "hdgst": false, 00:31:36.827 "ddgst": false 00:31:36.827 }, 00:31:36.827 "method": "bdev_nvme_attach_controller" 00:31:36.827 }' 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:36.827 "params": { 00:31:36.827 "name": "Nvme1", 00:31:36.827 "trtype": "tcp", 00:31:36.827 "traddr": "10.0.0.2", 00:31:36.827 "adrfam": "ipv4", 00:31:36.827 "trsvcid": "4420", 00:31:36.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:36.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:36.827 "hdgst": false, 00:31:36.827 "ddgst": false 00:31:36.827 }, 00:31:36.827 "method": "bdev_nvme_attach_controller" 00:31:36.827 }' 00:31:36.827 14:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:36.827 "params": { 00:31:36.827 "name": "Nvme1", 00:31:36.827 "trtype": "tcp", 00:31:36.827 "traddr": "10.0.0.2", 00:31:36.827 "adrfam": "ipv4", 00:31:36.827 "trsvcid": "4420", 00:31:36.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:36.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:36.827 "hdgst": false, 00:31:36.827 "ddgst": false 00:31:36.827 }, 00:31:36.827 "method": "bdev_nvme_attach_controller" 00:31:36.827 }' 00:31:36.827 [2024-11-20 14:51:43.708670] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:31:36.827 [2024-11-20 14:51:43.708725] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:36.827 [2024-11-20 14:51:43.713106] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:31:36.827 [2024-11-20 14:51:43.713107] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:31:36.827 [2024-11-20 14:51:43.713182] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:31:36.827 [2024-11-20 14:51:43.713185] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:36.827 [2024-11-20 14:51:43.714278] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:31:36.827 [2024-11-20 14:51:43.714344] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:31:37.087 [2024-11-20 14:51:43.890090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.087 [2024-11-20 14:51:43.931381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:37.087 [2024-11-20 14:51:43.942820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.087 [2024-11-20 14:51:43.981423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:37.087 [2024-11-20 14:51:44.031216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.087 [2024-11-20 14:51:44.070342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:37.087 [2024-11-20 14:51:44.113995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.347 [2024-11-20 14:51:44.157021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:37.347 Running I/O for 1 seconds... 00:31:37.347 Running I/O for 1 seconds... 00:31:37.347 Running I/O for 1 seconds... 00:31:37.347 Running I/O for 1 seconds... 
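At this point the test has four bdevperf processes running concurrently, one per workload (write/read/flush/unmap), each on its own core mask and tracked by the WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID variables that the script later waits on. A dry-run sketch of that fan-out/fan-in pattern; run_bdevperf() is a hypothetical stand-in that echoes the command line instead of launching SPDK's build/examples/bdevperf:

```shell
#!/bin/sh
# Dry-run sketch of the four concurrent bdevperf jobs in the log: one
# background process per workload, each pinned to its own core mask,
# all waited on so every result table prints before teardown.
# run_bdevperf() is a hypothetical stand-in; it echoes instead of
# launching SPDK's build/examples/bdevperf.
run_bdevperf() { # args: core_mask instance workload
    echo "bdevperf -m $1 -i $2 --json /dev/fd/63 -q 128 -o 4096 -w $3 -t 1 -s 256"
}

run_bdevperf 0x10 1 write & WRITE_PID=$!
run_bdevperf 0x20 2 read  & READ_PID=$!
run_bdevperf 0x40 3 flush & FLUSH_PID=$!
run_bdevperf 0x80 4 unmap & UNMAP_PID=$!

# Fan-in: block until all four jobs have reported.
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
```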
00:31:38.286 7231.00 IOPS, 28.25 MiB/s 00:31:38.286 Latency(us) 00:31:38.286 [2024-11-20T13:51:45.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.286 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:31:38.286 Nvme1n1 : 1.02 7249.14 28.32 0.00 0.00 17501.56 3126.61 23702.19 00:31:38.286 [2024-11-20T13:51:45.346Z] =================================================================================================================== 00:31:38.286 [2024-11-20T13:51:45.346Z] Total : 7249.14 28.32 0.00 0.00 17501.56 3126.61 23702.19 00:31:38.286 181888.00 IOPS, 710.50 MiB/s 00:31:38.286 Latency(us) 00:31:38.286 [2024-11-20T13:51:45.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.286 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:31:38.286 Nvme1n1 : 1.00 181527.68 709.09 0.00 0.00 701.42 296.96 1966.08 00:31:38.286 [2024-11-20T13:51:45.346Z] =================================================================================================================== 00:31:38.286 [2024-11-20T13:51:45.346Z] Total : 181527.68 709.09 0.00 0.00 701.42 296.96 1966.08 00:31:38.286 6845.00 IOPS, 26.74 MiB/s 00:31:38.286 Latency(us) 00:31:38.286 [2024-11-20T13:51:45.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.286 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:31:38.286 Nvme1n1 : 1.01 6931.89 27.08 0.00 0.00 18401.79 5324.80 30583.47 00:31:38.286 [2024-11-20T13:51:45.346Z] =================================================================================================================== 00:31:38.286 [2024-11-20T13:51:45.346Z] Total : 6931.89 27.08 0.00 0.00 18401.79 5324.80 30583.47 00:31:38.286 10646.00 IOPS, 41.59 MiB/s 00:31:38.286 Latency(us) 00:31:38.286 [2024-11-20T13:51:45.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.286 Job: Nvme1n1 (Core Mask 
0x10, workload: write, depth: 128, IO size: 4096) 00:31:38.286 Nvme1n1 : 1.01 10724.08 41.89 0.00 0.00 11894.37 2252.80 18131.63 00:31:38.286 [2024-11-20T13:51:45.346Z] =================================================================================================================== 00:31:38.286 [2024-11-20T13:51:45.346Z] Total : 10724.08 41.89 0.00 0.00 11894.37 2252.80 18131.63 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4155412 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4155414 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4155416 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:38.546 14:51:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:38.546 rmmod nvme_tcp 00:31:38.546 rmmod nvme_fabrics 00:31:38.546 rmmod nvme_keyring 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 4155323 ']' 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 4155323 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 4155323 ']' 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 4155323 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4155323 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4155323' 00:31:38.546 killing process with pid 4155323 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 4155323 00:31:38.546 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 4155323 00:31:38.806 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:38.806 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:38.806 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:38.806 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:38.806 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:31:38.806 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:38.806 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:31:38.806 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:38.806 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:38.806 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.806 14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.806 
14:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.713 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:40.713 00:31:40.713 real 0m10.523s 00:31:40.713 user 0m14.299s 00:31:40.713 sys 0m5.850s 00:31:40.713 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.713 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.713 ************************************ 00:31:40.713 END TEST nvmf_bdev_io_wait 00:31:40.713 ************************************ 00:31:40.974 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:40.974 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:40.974 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.974 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:40.974 ************************************ 00:31:40.974 START TEST nvmf_queue_depth 00:31:40.974 ************************************ 00:31:40.974 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:40.974 * Looking for test storage... 
00:31:40.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:40.974 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:40.974 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:31:40.974 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:40.974 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:40.974 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.974 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.974 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.974 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.974 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:40.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.975 --rc genhtml_branch_coverage=1 00:31:40.975 --rc genhtml_function_coverage=1 00:31:40.975 --rc genhtml_legend=1 00:31:40.975 --rc geninfo_all_blocks=1 00:31:40.975 --rc geninfo_unexecuted_blocks=1 00:31:40.975 00:31:40.975 ' 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:40.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.975 --rc genhtml_branch_coverage=1 00:31:40.975 --rc genhtml_function_coverage=1 00:31:40.975 --rc genhtml_legend=1 00:31:40.975 --rc geninfo_all_blocks=1 00:31:40.975 --rc geninfo_unexecuted_blocks=1 00:31:40.975 00:31:40.975 ' 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:40.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.975 --rc genhtml_branch_coverage=1 00:31:40.975 --rc genhtml_function_coverage=1 00:31:40.975 --rc genhtml_legend=1 00:31:40.975 --rc geninfo_all_blocks=1 00:31:40.975 --rc geninfo_unexecuted_blocks=1 00:31:40.975 00:31:40.975 ' 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:40.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.975 --rc genhtml_branch_coverage=1 00:31:40.975 --rc genhtml_function_coverage=1 00:31:40.975 --rc genhtml_legend=1 00:31:40.975 --rc 
geninfo_all_blocks=1 00:31:40.975 --rc geninfo_unexecuted_blocks=1 00:31:40.975 00:31:40.975 ' 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.975 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.975 14:51:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.976 14:51:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:40.976 14:51:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:40.976 14:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:47.547 
14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:47.547 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:47.547 14:51:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:47.547 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:47.547 Found net devices under 0000:31:00.0: cvl_0_0 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.547 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:47.548 Found net devices under 0000:31:00.1: cvl_0_1 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:47.548 14:51:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:47.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:47.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:31:47.548 00:31:47.548 --- 10.0.0.2 ping statistics --- 00:31:47.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.548 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:47.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:47.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:31:47.548 00:31:47.548 --- 10.0.0.1 ping statistics --- 00:31:47.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.548 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:47.548 14:51:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=4160168 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 4160168 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4160168 ']' 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:47.548 14:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.548 [2024-11-20 14:51:53.697933] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:47.548 [2024-11-20 14:51:53.699088] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:31:47.548 [2024-11-20 14:51:53.699142] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:47.548 [2024-11-20 14:51:53.793122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.548 [2024-11-20 14:51:53.844374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:47.548 [2024-11-20 14:51:53.844425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:47.548 [2024-11-20 14:51:53.844434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:47.548 [2024-11-20 14:51:53.844441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:47.548 [2024-11-20 14:51:53.844447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:47.548 [2024-11-20 14:51:53.845236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.548 [2024-11-20 14:51:53.922203] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:47.548 [2024-11-20 14:51:53.922492] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:47.548 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:47.548 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:47.548 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:47.548 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:47.548 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.548 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:47.548 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:47.548 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.548 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.548 [2024-11-20 14:51:54.529779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:47.548 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.548 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:47.548 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.548 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.548 Malloc0 00:31:47.548 14:51:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.548 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:47.548 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.549 [2024-11-20 14:51:54.585968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.549 
14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4160458 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4160458 /var/tmp/bdevperf.sock 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4160458 ']' 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:47.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.549 14:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:47.809 [2024-11-20 14:51:54.627960] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:31:47.809 [2024-11-20 14:51:54.628026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160458 ] 00:31:47.809 [2024-11-20 14:51:54.712406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.809 [2024-11-20 14:51:54.765339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.377 14:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:48.377 14:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:48.377 14:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:48.377 14:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.377 14:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:48.638 NVMe0n1 00:31:48.638 14:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.638 14:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:48.897 Running I/O for 10 seconds... 
00:31:50.771 9495.00 IOPS, 37.09 MiB/s [2024-11-20T13:51:58.768Z] 11254.00 IOPS, 43.96 MiB/s [2024-11-20T13:52:00.143Z] 11946.67 IOPS, 46.67 MiB/s [2024-11-20T13:52:01.080Z] 12294.50 IOPS, 48.03 MiB/s [2024-11-20T13:52:02.016Z] 12502.20 IOPS, 48.84 MiB/s [2024-11-20T13:52:02.952Z] 12746.17 IOPS, 49.79 MiB/s [2024-11-20T13:52:03.891Z] 12866.00 IOPS, 50.26 MiB/s [2024-11-20T13:52:04.880Z] 12929.12 IOPS, 50.50 MiB/s [2024-11-20T13:52:05.868Z] 13007.89 IOPS, 50.81 MiB/s [2024-11-20T13:52:05.868Z] 13083.70 IOPS, 51.11 MiB/s 00:31:58.808 Latency(us) 00:31:58.808 [2024-11-20T13:52:05.868Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.808 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:58.808 Verification LBA range: start 0x0 length 0x4000 00:31:58.808 NVMe0n1 : 10.06 13103.56 51.19 0.00 0.00 77861.13 23811.41 67720.53 00:31:58.808 [2024-11-20T13:52:05.868Z] =================================================================================================================== 00:31:58.808 [2024-11-20T13:52:05.868Z] Total : 13103.56 51.19 0.00 0.00 77861.13 23811.41 67720.53 00:31:58.808 { 00:31:58.808 "results": [ 00:31:58.808 { 00:31:58.808 "job": "NVMe0n1", 00:31:58.808 "core_mask": "0x1", 00:31:58.808 "workload": "verify", 00:31:58.808 "status": "finished", 00:31:58.808 "verify_range": { 00:31:58.808 "start": 0, 00:31:58.808 "length": 16384 00:31:58.808 }, 00:31:58.808 "queue_depth": 1024, 00:31:58.808 "io_size": 4096, 00:31:58.808 "runtime": 10.060318, 00:31:58.808 "iops": 13103.561935119746, 00:31:58.808 "mibps": 51.185788809061506, 00:31:58.808 "io_failed": 0, 00:31:58.808 "io_timeout": 0, 00:31:58.808 "avg_latency_us": 77861.1289135679, 00:31:58.808 "min_latency_us": 23811.413333333334, 00:31:58.808 "max_latency_us": 67720.53333333334 00:31:58.808 } 00:31:58.808 ], 00:31:58.808 "core_count": 1 00:31:58.808 } 00:31:58.808 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 4160458 00:31:58.808 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4160458 ']' 00:31:58.808 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4160458 00:31:58.808 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:58.808 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:58.808 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4160458 00:31:59.068 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:59.068 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:59.068 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4160458' 00:31:59.068 killing process with pid 4160458 00:31:59.068 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4160458 00:31:59.068 Received shutdown signal, test time was about 10.000000 seconds 00:31:59.068 00:31:59.068 Latency(us) 00:31:59.068 [2024-11-20T13:52:06.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.068 [2024-11-20T13:52:06.128Z] =================================================================================================================== 00:31:59.068 [2024-11-20T13:52:06.128Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:59.068 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4160458 00:31:59.068 14:52:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:59.068 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:59.068 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:59.068 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:59.068 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:59.068 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:59.068 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:59.068 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:59.068 rmmod nvme_tcp 00:31:59.068 rmmod nvme_fabrics 00:31:59.068 rmmod nvme_keyring 00:31:59.068 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:59.068 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:59.068 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:59.068 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 4160168 ']' 00:31:59.068 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 4160168 00:31:59.068 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4160168 ']' 00:31:59.068 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4160168 00:31:59.068 14:52:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:59.068 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:59.068 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4160168 00:31:59.068 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:59.068 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:59.068 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4160168' 00:31:59.068 killing process with pid 4160168 00:31:59.068 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4160168 00:31:59.068 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4160168 00:31:59.329 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:59.329 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:59.329 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:59.329 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:59.329 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:59.329 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:59.329 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:31:59.329 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:59.329 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:59.329 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.329 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.329 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.236 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:01.236 00:32:01.236 real 0m20.452s 00:32:01.236 user 0m23.980s 00:32:01.236 sys 0m5.959s 00:32:01.236 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:01.236 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:01.236 ************************************ 00:32:01.236 END TEST nvmf_queue_depth 00:32:01.236 ************************************ 00:32:01.236 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:01.236 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:01.236 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:01.236 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:01.500 ************************************ 00:32:01.500 START 
TEST nvmf_target_multipath 00:32:01.500 ************************************ 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:01.500 * Looking for test storage... 00:32:01.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:01.500 14:52:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:01.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.500 --rc genhtml_branch_coverage=1 00:32:01.500 --rc genhtml_function_coverage=1 00:32:01.500 --rc genhtml_legend=1 00:32:01.500 --rc geninfo_all_blocks=1 00:32:01.500 --rc geninfo_unexecuted_blocks=1 00:32:01.500 00:32:01.500 ' 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:01.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.500 --rc genhtml_branch_coverage=1 00:32:01.500 --rc genhtml_function_coverage=1 00:32:01.500 --rc genhtml_legend=1 00:32:01.500 --rc geninfo_all_blocks=1 00:32:01.500 --rc geninfo_unexecuted_blocks=1 00:32:01.500 00:32:01.500 ' 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:01.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.500 --rc genhtml_branch_coverage=1 00:32:01.500 --rc genhtml_function_coverage=1 00:32:01.500 --rc genhtml_legend=1 00:32:01.500 --rc geninfo_all_blocks=1 00:32:01.500 --rc geninfo_unexecuted_blocks=1 00:32:01.500 00:32:01.500 ' 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:01.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.500 --rc genhtml_branch_coverage=1 00:32:01.500 --rc genhtml_function_coverage=1 00:32:01.500 --rc genhtml_legend=1 00:32:01.500 --rc geninfo_all_blocks=1 00:32:01.500 --rc geninfo_unexecuted_blocks=1 00:32:01.500 00:32:01.500 ' 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:01.500 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:01.501 14:52:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.501 14:52:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:01.501 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:32:06.785 14:52:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:06.785 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:06.785 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:06.785 Found net devices under 0000:31:00.0: cvl_0_0 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.785 14:52:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:06.785 Found net devices under 0000:31:00.1: cvl_0_1 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.785 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.786 14:52:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.786 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.045 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.045 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.045 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:07.045 14:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.045 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.045 14:52:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.045 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:07.045 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:07.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:32:07.045 00:32:07.045 --- 10.0.0.2 ping statistics --- 00:32:07.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.045 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:32:07.045 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:07.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:32:07.045 00:32:07.045 --- 10.0.0.1 ping statistics --- 00:32:07.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.045 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:32:07.045 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.045 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:32:07.045 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:07.045 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.045 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:07.045 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:07.045 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.045 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:07.046 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:07.046 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:07.046 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:07.046 only one NIC for nvmf test 00:32:07.046 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:07.046 14:52:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:07.046 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:07.306 rmmod nvme_tcp 00:32:07.306 rmmod nvme_fabrics 00:32:07.306 rmmod nvme_keyring 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:07.306 14:52:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:07.306 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.217 
14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:09.217 00:32:09.217 real 0m7.916s 00:32:09.217 user 0m1.449s 00:32:09.217 sys 0m4.362s 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:09.217 ************************************ 00:32:09.217 END TEST nvmf_target_multipath 00:32:09.217 ************************************ 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:09.217 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:09.478 ************************************ 00:32:09.478 START TEST nvmf_zcopy 00:32:09.478 ************************************ 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:09.478 * Looking for test storage... 
00:32:09.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:09.478 14:52:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:09.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.478 --rc genhtml_branch_coverage=1 00:32:09.478 --rc genhtml_function_coverage=1 00:32:09.478 --rc genhtml_legend=1 00:32:09.478 --rc geninfo_all_blocks=1 00:32:09.478 --rc geninfo_unexecuted_blocks=1 00:32:09.478 00:32:09.478 ' 00:32:09.478 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:09.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.478 --rc genhtml_branch_coverage=1 00:32:09.478 --rc genhtml_function_coverage=1 00:32:09.478 --rc genhtml_legend=1 00:32:09.478 --rc geninfo_all_blocks=1 00:32:09.478 --rc geninfo_unexecuted_blocks=1 00:32:09.478 00:32:09.478 ' 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:09.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.479 --rc genhtml_branch_coverage=1 00:32:09.479 --rc genhtml_function_coverage=1 00:32:09.479 --rc genhtml_legend=1 00:32:09.479 --rc geninfo_all_blocks=1 00:32:09.479 --rc geninfo_unexecuted_blocks=1 00:32:09.479 00:32:09.479 ' 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:09.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.479 --rc genhtml_branch_coverage=1 00:32:09.479 --rc genhtml_function_coverage=1 00:32:09.479 --rc genhtml_legend=1 00:32:09.479 --rc geninfo_all_blocks=1 00:32:09.479 --rc geninfo_unexecuted_blocks=1 00:32:09.479 00:32:09.479 ' 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.479 14:52:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:09.479 14:52:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:09.479 14:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:14.757 
14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:14.757 14:52:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:14.757 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:14.757 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:14.757 Found net devices under 0000:31:00.0: cvl_0_0 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:14.757 Found net devices under 0000:31:00.1: cvl_0_1 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
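The trace above walks `gather_supported_nvmf_pci_devs`: for each candidate NIC PCI address it globs the `net/` directory under sysfs to find the kernel interface names bound to that device. A minimal standalone sketch of that walk follows; the PCI addresses are the ones from this log and are assumptions on any other host.

```shell
#!/usr/bin/env bash
# Sketch of the device-discovery loop traced above. For each PCI address,
# list the net devices the kernel created for it via sysfs, then strip
# the path prefix so only the interface name (e.g. cvl_0_0) remains.
pci_devs=("0000:31:00.0" "0000:31:00.1")   # from this log; host-specific
net_devs=()

for pci in "${pci_devs[@]}"; do
    # Glob the sysfs net/ directory for this device
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # "${var##*/}" drops everything up to the last slash
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
```

On a machine without these devices the glob stays unexpanded, so the echo prints a literal `*`; the harness guards against that by counting matches before using them.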
00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:14.757 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:15.018 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:15.018 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:15.018 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:15.018 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:15.018 14:52:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:15.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:15.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:32:15.018 00:32:15.018 --- 10.0.0.2 ping statistics --- 00:32:15.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.018 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:15.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:15.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:32:15.018 00:32:15.018 --- 10.0.0.1 ping statistics --- 00:32:15.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.018 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
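The `nvmf_tcp_init` sequence traced above isolates the target NIC in its own network namespace, assigns the 10.0.0.0/24 addresses, opens TCP port 4420, and verifies reachability with `ping`. The sketch below reproduces that sequence under stated assumptions (interface names `cvl_0_0`/`cvl_0_1` and the IPs come from this log); it defaults to a dry run that only prints the commands, since the real ones require root.

```shell
#!/usr/bin/env bash
# Hedged sketch of the namespace plumbing from nvmf/common.sh's
# nvmf_tcp_init, as expanded in this log. DRY_RUN=1 (the default)
# prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0       # moved into the namespace; owns the target IP
INITIATOR_IF=cvl_0_1    # stays in the root namespace

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator-side interface, then verify
# reachability in both directions, as the log does.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Putting the target interface in a namespace lets a single physical host act as both NVMe-oF target and initiator over real NIC ports, which is why the log later prefixes the target app with `ip netns exec cvl_0_0_ns_spdk`.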
nvmfpid=4171462 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 4171462 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 4171462 ']' 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:15.018 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:15.278 [2024-11-20 14:52:22.089819] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:15.278 [2024-11-20 14:52:22.090976] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:32:15.279 [2024-11-20 14:52:22.091028] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:15.279 [2024-11-20 14:52:22.168800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.279 [2024-11-20 14:52:22.204955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:15.279 [2024-11-20 14:52:22.204990] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:15.279 [2024-11-20 14:52:22.204996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:15.279 [2024-11-20 14:52:22.205001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:15.279 [2024-11-20 14:52:22.205005] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:15.279 [2024-11-20 14:52:22.205622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.279 [2024-11-20 14:52:22.260283] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:15.279 [2024-11-20 14:52:22.260473] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:15.847 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:15.847 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:32:15.847 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:15.847 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:15.847 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:15.847 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:15.847 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:15.847 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:15.847 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.847 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:15.847 [2024-11-20 14:52:22.902347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:15.847 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.847 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:15.847 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.847 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:16.107 
14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:16.107 [2024-11-20 14:52:22.918328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:16.107 malloc0 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:16.107 { 00:32:16.107 "params": { 00:32:16.107 "name": "Nvme$subsystem", 00:32:16.107 "trtype": "$TEST_TRANSPORT", 00:32:16.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:16.107 "adrfam": "ipv4", 00:32:16.107 "trsvcid": "$NVMF_PORT", 00:32:16.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:16.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:16.107 "hdgst": ${hdgst:-false}, 00:32:16.107 "ddgst": ${ddgst:-false} 00:32:16.107 }, 00:32:16.107 "method": "bdev_nvme_attach_controller" 00:32:16.107 } 00:32:16.107 EOF 00:32:16.107 )") 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:16.107 14:52:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:16.107 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:16.107 "params": { 00:32:16.107 "name": "Nvme1", 00:32:16.107 "trtype": "tcp", 00:32:16.107 "traddr": "10.0.0.2", 00:32:16.107 "adrfam": "ipv4", 00:32:16.107 "trsvcid": "4420", 00:32:16.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:16.107 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:16.107 "hdgst": false, 00:32:16.107 "ddgst": false 00:32:16.107 }, 00:32:16.107 "method": "bdev_nvme_attach_controller" 00:32:16.107 }' 00:32:16.107 [2024-11-20 14:52:22.984413] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:32:16.107 [2024-11-20 14:52:22.984462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171808 ] 00:32:16.107 [2024-11-20 14:52:23.061640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.107 [2024-11-20 14:52:23.097623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.677 Running I/O for 10 seconds... 
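The `gen_nvmf_target_json` expansion above builds one `bdev_nvme_attach_controller` params object per subsystem from a heredoc and pipes it to bdevperf over a file descriptor. A simplified single-subsystem sketch follows; the defaults mirror the expanded values in this log (10.0.0.2:4420, `cnode1`/`host1`) and are assumptions on any other setup, and the real helper also wraps the objects in a `subsystems` config envelope.

```shell
#!/usr/bin/env bash
# Hedged sketch of the per-subsystem JSON fragment gen_nvmf_target_json
# emits, as seen expanded in this log. Environment variables override
# the log's defaults.
gen_target_json() {
    local subsystem=${1:-1}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json 1
```

Feeding the config through `--json /dev/fd/62` (a process-substitution fd) keeps the generated JSON out of the filesystem, so no temp file needs cleanup when the test aborts.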
00:32:18.551 9709.00 IOPS, 75.85 MiB/s
[2024-11-20T13:52:26.547Z] 9782.50 IOPS, 76.43 MiB/s
[2024-11-20T13:52:27.482Z] 9808.67 IOPS, 76.63 MiB/s
[2024-11-20T13:52:28.863Z] 9821.25 IOPS, 76.73 MiB/s
[2024-11-20T13:52:29.807Z] 9831.20 IOPS, 76.81 MiB/s
[2024-11-20T13:52:30.749Z] 9834.67 IOPS, 76.83 MiB/s
[2024-11-20T13:52:31.688Z] 9834.43 IOPS, 76.83 MiB/s
[2024-11-20T13:52:32.626Z] 9836.75 IOPS, 76.85 MiB/s
[2024-11-20T13:52:33.566Z] 9842.33 IOPS, 76.89 MiB/s
[2024-11-20T13:52:33.566Z] 9846.90 IOPS, 76.93 MiB/s
00:32:26.506 Latency(us)
00:32:26.506 [2024-11-20T13:52:33.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:26.506 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:32:26.506 Verification LBA range: start 0x0 length 0x1000
00:32:26.506 Nvme1n1 : 10.01 9848.89 76.94 0.00 0.00 12952.44 1194.67 20316.16
00:32:26.506 [2024-11-20T13:52:33.566Z] ===================================================================================================================
00:32:26.506 [2024-11-20T13:52:33.566Z] Total : 9848.89 76.94 0.00 0.00 12952.44 1194.67 20316.16
00:32:26.766 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4174017
00:32:26.766 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:32:26.766 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:26.766 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:32:26.766 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:32:26.766 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:32:26.766 14:52:33
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:26.766 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:26.766 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:26.766 { 00:32:26.766 "params": { 00:32:26.766 "name": "Nvme$subsystem", 00:32:26.766 "trtype": "$TEST_TRANSPORT", 00:32:26.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:26.766 "adrfam": "ipv4", 00:32:26.766 "trsvcid": "$NVMF_PORT", 00:32:26.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:26.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:26.766 "hdgst": ${hdgst:-false}, 00:32:26.766 "ddgst": ${ddgst:-false} 00:32:26.766 }, 00:32:26.766 "method": "bdev_nvme_attach_controller" 00:32:26.766 } 00:32:26.766 EOF 00:32:26.766 )") 00:32:26.766 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:26.766 [2024-11-20 14:52:33.581927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.766 [2024-11-20 14:52:33.581956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.766 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:32:26.766 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:26.766 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:26.766 "params": { 00:32:26.766 "name": "Nvme1", 00:32:26.766 "trtype": "tcp", 00:32:26.766 "traddr": "10.0.0.2", 00:32:26.766 "adrfam": "ipv4", 00:32:26.766 "trsvcid": "4420", 00:32:26.766 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:26.766 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:26.766 "hdgst": false, 00:32:26.766 "ddgst": false 00:32:26.766 }, 00:32:26.766 "method": "bdev_nvme_attach_controller" 00:32:26.766 }' 00:32:26.766 [2024-11-20 14:52:33.589900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.766 [2024-11-20 14:52:33.589909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.766 [2024-11-20 14:52:33.597898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.597905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.605898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.605906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.609125] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:32:26.767 [2024-11-20 14:52:33.609171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174017 ] 00:32:26.767 [2024-11-20 14:52:33.613898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.613906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.625898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.625905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.633898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.633907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.641898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.641906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.649898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.649905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.657898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.657906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.665898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.665906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:32:26.767 [2024-11-20 14:52:33.673709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.767 [2024-11-20 14:52:33.673898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.673905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.681900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.681909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.689898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.689910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.697898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.697908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.703171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.767 [2024-11-20 14:52:33.705898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.705907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.713901] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.713908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.721904] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.721916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.729900] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.729911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.737898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.737908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.745898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.745906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.753899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.753907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.761898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.761904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.769924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.769940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.777918] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.777929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.785904] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.785916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.793900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.793910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.801901] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.801911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.809900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.809910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:26.767 [2024-11-20 14:52:33.817898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:26.767 [2024-11-20 14:52:33.817904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.825898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.825906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.833898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.833908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.841898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.841904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.849898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.849905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.857899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 
[2024-11-20 14:52:33.857907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.865898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.865905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.873898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.873905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.881897] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.881904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.889898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.889904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.897900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.897909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.905898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.905904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.913898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.913905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.921898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.921904] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.929898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.929904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.937897] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.937905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.945898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.945906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.953902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.953916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 Running I/O for 5 seconds... 
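As a sanity check on the 10-second bdevperf summary earlier in this log (9848.89 IOPS, 76.94 MiB/s for Nvme1n1), the MiB/s column is simply IOPS times the 8192-byte IO size set with -o 8192:

```python
# Recompute bdevperf's MiB/s column from its IOPS column.
io_size = 8192                    # bytes per IO, from `bdevperf -o 8192`
iops = 9848.89                    # Nvme1n1 average over the 10 s verify run
mib_per_s = iops * io_size / (1024 * 1024)
print(f"{mib_per_s:.2f} MiB/s")   # matches the 76.94 MiB/s in the table
```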
00:32:27.028 [2024-11-20 14:52:33.961901] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.961913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.973117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.973132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.978809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.978823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.988356] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.988374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:33.997799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:33.997815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:34.010449] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:34.010464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:34.022102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:34.022117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:34.027902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:34.027916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:34.037272] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:34.037286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:34.042988] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:34.043002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:34.052363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:34.052378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:34.061284] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:34.061299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:34.067237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:34.067256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:34.077077] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:34.077091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.028 [2024-11-20 14:52:34.082880] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.028 [2024-11-20 14:52:34.082895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.289 [2024-11-20 14:52:34.092524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.289 [2024-11-20 14:52:34.092539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.289 [2024-11-20 14:52:34.100089] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:27.289 [2024-11-20 14:52:34.100104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.289 [2024-11-20 14:52:34.108464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.289 [2024-11-20 14:52:34.108478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.289 [2024-11-20 14:52:34.117824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.289 [2024-11-20 14:52:34.117838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.289 [2024-11-20 14:52:34.123394] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.289 [2024-11-20 14:52:34.123409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.289 [2024-11-20 14:52:34.132174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.289 [2024-11-20 14:52:34.132188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.289 [2024-11-20 14:52:34.141743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.289 [2024-11-20 14:52:34.141758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.289 [2024-11-20 14:52:34.154410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.289 [2024-11-20 14:52:34.154428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.289 [2024-11-20 14:52:34.166399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.289 [2024-11-20 14:52:34.166413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.289 [2024-11-20 14:52:34.178932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.289 
[2024-11-20 14:52:34.178947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.189971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.189985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.195783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.195797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.204599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.204614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.210554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.210568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.221142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.221157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.227053] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.227067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.237012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.237026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.242767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.242782] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.253002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.253016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.258797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.258811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.269208] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.269222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.275053] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.275067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.284727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.284741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.290504] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.290518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.300420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.300434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.309233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.309252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:27.290 [2024-11-20 14:52:34.314860] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.314880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.324973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.324987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.330899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.330913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.340606] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.340620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.290 [2024-11-20 14:52:34.348658] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.290 [2024-11-20 14:52:34.348673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.354612] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.354626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.364989] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.365004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.370756] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.370770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.380621] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.380635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.387966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.387980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.397667] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.397682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.410312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.410327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.422633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.422647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.434433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.434448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.447259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.447274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.458827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.458841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.470866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.470880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.482958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.482972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.494009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.494023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.500014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.500028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.509623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.509637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.515412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.515426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.525458] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.525472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.531297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 [2024-11-20 14:52:34.531311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.550 [2024-11-20 14:52:34.541027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.550 
[2024-11-20 14:52:34.541041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.551 [2024-11-20 14:52:34.546633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.551 [2024-11-20 14:52:34.546646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.551 [2024-11-20 14:52:34.556530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.551 [2024-11-20 14:52:34.556544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.551 [2024-11-20 14:52:34.565399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.551 [2024-11-20 14:52:34.565413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.551 [2024-11-20 14:52:34.571217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.551 [2024-11-20 14:52:34.571231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.551 [2024-11-20 14:52:34.580789] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.551 [2024-11-20 14:52:34.580804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.551 [2024-11-20 14:52:34.586498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.551 [2024-11-20 14:52:34.586512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.551 [2024-11-20 14:52:34.596304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.551 [2024-11-20 14:52:34.596318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.551 [2024-11-20 14:52:34.605805] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.551 [2024-11-20 14:52:34.605819] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.611598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.611612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.620559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.620573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.629429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.629443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.635174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.635187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.644713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.644727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.650652] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.650666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.661039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.661053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.666942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.666957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:27.811 [2024-11-20 14:52:34.676780] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.676794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.682677] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.682691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.692843] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.692859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.698594] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.698608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.708737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.708752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.714485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.714499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.724847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.724861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.730616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.730629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.740952] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.740966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.746521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.746535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.756716] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.756731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.762667] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.762681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.773047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.773062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.778710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.778724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.789204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.789219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.794982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.794997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.804601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.804616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.813437] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.813451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.819186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.819200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.828605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.828619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.834501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.834515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.844697] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.844712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.851963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.851976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.860907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 [2024-11-20 14:52:34.860922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.811 [2024-11-20 14:52:34.866762] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:27.811 
[2024-11-20 14:52:34.866777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:34.877118] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:34.877133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:34.882878] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:34.882892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:34.892147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:34.892161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:34.901878] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:34.901893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:34.907550] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:34.907564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:34.917167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:34.917181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:34.922954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:34.922969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:34.932360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:34.932374] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:34.941270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:34.941284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:34.947050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:34.947068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:34.957313] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:34.957328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 19327.00 IOPS, 150.99 MiB/s [2024-11-20T13:52:35.131Z] [2024-11-20 14:52:34.970264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:34.970279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:34.982740] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:34.982754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:34.994214] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:34.994228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:35.007049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:35.007064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:35.017924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:35.017939] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:35.023639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:35.023653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:35.032460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:35.032483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:35.041331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:35.041346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:35.047132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:35.047146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:35.056692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:35.056706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:35.064601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:35.064615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:35.070496] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:35.070510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:35.080995] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:35.081009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:28.071 [2024-11-20 14:52:35.086694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:35.086708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:35.096322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:35.096337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:35.105710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:35.105725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:35.118239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:35.118258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.071 [2024-11-20 14:52:35.130770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.071 [2024-11-20 14:52:35.130788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.142961] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.142976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.155087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.155101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.165499] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.165513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.171419] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.171433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.181004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.181018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.186665] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.186679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.196366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.196381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.205131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.205146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.210704] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.210718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.221029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.221043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.226755] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.226769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.236523] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.236537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.245318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.245332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.251017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.251031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.260452] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.260467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.269184] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.269199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.275056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.275071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.284913] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.284927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.290684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.290702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.300742] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 
[2024-11-20 14:52:35.300756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.306631] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.306645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.316596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.316610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.325318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.325332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.330890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.330904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.341167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.341182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.347229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.347248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.356576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.356590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.365283] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.365297] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.371413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.371427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.380620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.380634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.331 [2024-11-20 14:52:35.389814] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.331 [2024-11-20 14:52:35.389828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.591 [2024-11-20 14:52:35.395544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.591 [2024-11-20 14:52:35.395558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.591 [2024-11-20 14:52:35.405190] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.591 [2024-11-20 14:52:35.405205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.591 [2024-11-20 14:52:35.411281] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.591 [2024-11-20 14:52:35.411295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.591 [2024-11-20 14:52:35.420709] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.591 [2024-11-20 14:52:35.420723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.591 [2024-11-20 14:52:35.429273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.591 [2024-11-20 14:52:35.429288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:28.591 [2024-11-20 14:52:35.435097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.591 [2024-11-20 14:52:35.435111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.591 [2024-11-20 14:52:35.445166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.591 [2024-11-20 14:52:35.445181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.591 [2024-11-20 14:52:35.450727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.591 [2024-11-20 14:52:35.450740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.591 [2024-11-20 14:52:35.461306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.591 [2024-11-20 14:52:35.461321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.591 [2024-11-20 14:52:35.467102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.591 [2024-11-20 14:52:35.467116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.476510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.476524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.484056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.484069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.492967] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.492981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.498608] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.498622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.508734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.508748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.515940] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.515954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.526240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.526258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.539133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.539147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.549761] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.549774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.562476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.562491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.574239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.574259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.587029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.587043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.598124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.598139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.603942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.603956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.613468] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.613482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.619188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.619202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.628671] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.628685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.634558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.634572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.592 [2024-11-20 14:52:35.644747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.592 [2024-11-20 14:52:35.644761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.851 [2024-11-20 14:52:35.653317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.851 
[2024-11-20 14:52:35.653332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.851 [2024-11-20 14:52:35.659262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.851 [2024-11-20 14:52:35.659277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.851 [2024-11-20 14:52:35.668307] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.851 [2024-11-20 14:52:35.668321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.851 [2024-11-20 14:52:35.677678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.851 [2024-11-20 14:52:35.677692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.851 [2024-11-20 14:52:35.683414] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.851 [2024-11-20 14:52:35.683428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.851 [2024-11-20 14:52:35.693430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.851 [2024-11-20 14:52:35.693445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.851 [2024-11-20 14:52:35.699282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.851 [2024-11-20 14:52:35.699296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.851 [2024-11-20 14:52:35.708090] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.851 [2024-11-20 14:52:35.708104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.851 [2024-11-20 14:52:35.717644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.851 [2024-11-20 14:52:35.717659] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.851 [2024-11-20 14:52:35.723385] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:28.851 [2024-11-20 14:52:35.723399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:28.851 [... the same spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" / nvmf_rpc_ns_paused "Unable to add namespace" pair repeats from 14:52:35.733291 through 14:52:35.958580 ...] 00:32:29.111 19362.50 IOPS, 151.27 MiB/s [2024-11-20T13:52:36.171Z] [... the same error pair repeats from 14:52:35.968700 through 14:52:36.966884 ...] 00:32:30.156 19366.67 IOPS, 151.30 MiB/s [2024-11-20T13:52:37.216Z] [... the same error pair repeats from 14:52:36.976433 through 14:52:37.032315 ...] [2024-11-20 14:52:37.041180]
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.041194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.047198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.047216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.056537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.056552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.065264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.065278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.070825] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.070838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.080588] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.080602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.089384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.089398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.095022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.095036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.104424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.104438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.113757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.113772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.126344] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.126358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.138733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.138747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.150794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.150808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.163193] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.163207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.172227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.172240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.181329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.181343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.187201] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 
[2024-11-20 14:52:37.187215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.196753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.196767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.205432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.205446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.157 [2024-11-20 14:52:37.211231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.157 [2024-11-20 14:52:37.211249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.416 [2024-11-20 14:52:37.220635] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.416 [2024-11-20 14:52:37.220654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.416 [2024-11-20 14:52:37.227967] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.416 [2024-11-20 14:52:37.227981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.416 [2024-11-20 14:52:37.238226] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.416 [2024-11-20 14:52:37.238240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.416 [2024-11-20 14:52:37.251006] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.416 [2024-11-20 14:52:37.251020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.263211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.263225] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.273856] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.273870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.279800] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.279815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.288562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.288576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.297329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.297343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.303313] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.303327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.312040] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.312054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.321605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.321619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.327192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.327206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:30.417 [2024-11-20 14:52:37.336266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.336280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.345630] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.345644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.351491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.351505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.361255] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.361269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.367375] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.367389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.377395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.377409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.383157] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.383174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.392881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.392895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.399103] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.399117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.408024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.408038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.418694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.418708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.430846] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.430861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.443348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.443362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.453891] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.453906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.459629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.459643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.417 [2024-11-20 14:52:37.468296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.417 [2024-11-20 14:52:37.468311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.477737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.477751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.490461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.490475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.502784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.502798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.514087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.514102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.519802] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.519816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.529386] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.529401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.535151] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.535166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.544627] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.544641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.550500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 
[2024-11-20 14:52:37.550514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.560588] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.560607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.569197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.569212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.574800] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.574814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.584382] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.584396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.593737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.593751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.606436] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.606450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.618978] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.618994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.631066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.631081] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.642023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.642037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.648295] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.648309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.657201] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.657216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.662982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.662996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.672987] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.673002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.678767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.678781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.688524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.688539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.697392] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.697407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:30.678 [2024-11-20 14:52:37.703161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.703176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.713254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.713269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.718850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.718865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.728668] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.728683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.678 [2024-11-20 14:52:37.734678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.678 [2024-11-20 14:52:37.734692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.938 [2024-11-20 14:52:37.745105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.938 [2024-11-20 14:52:37.745121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.938 [2024-11-20 14:52:37.751001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.938 [2024-11-20 14:52:37.751016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.938 [2024-11-20 14:52:37.760429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.938 [2024-11-20 14:52:37.760443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.938 [2024-11-20 14:52:37.769012] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.938 [2024-11-20 14:52:37.769027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.938 [2024-11-20 14:52:37.781487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.938 [2024-11-20 14:52:37.781502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.938 [2024-11-20 14:52:37.787909] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.938 [2024-11-20 14:52:37.787923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.796856] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.796871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.802676] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.802690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.812773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.812788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.818644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.818659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.828954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.828969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.834792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.834807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.844399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.844414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.853164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.853179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.858760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.858775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.868648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.868662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.874519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.874533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.884839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.884854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.890770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.890785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.900393] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 
[2024-11-20 14:52:37.900407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.909817] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.909832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.915651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.915666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.924319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.924334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.933751] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.933766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.946355] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.946369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.959005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.959020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.969927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.969942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 19382.75 IOPS, 151.43 MiB/s [2024-11-20T13:52:37.999Z] [2024-11-20 14:52:37.976065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 
[2024-11-20 14:52:37.976080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.984350] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.984365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:30.939 [2024-11-20 14:52:37.993709] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:30.939 [2024-11-20 14:52:37.993724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:31.199 [2024-11-20 14:52:38.006384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:31.199 [2024-11-20 14:52:38.006399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:31.199 [2024-11-20 14:52:38.019097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:31.199 [2024-11-20 14:52:38.019112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:31.199 [2024-11-20 14:52:38.030054] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:31.199 [2024-11-20 14:52:38.030069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:31.199 [2024-11-20 14:52:38.035848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:31.199 [2024-11-20 14:52:38.035862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:31.199 [2024-11-20 14:52:38.044680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:31.199 [2024-11-20 14:52:38.044694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:31.199 [2024-11-20 14:52:38.050715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:31.199 [2024-11-20 14:52:38.050733] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:31.199 [2024-11-20 14:52:38.060686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:31.199 [2024-11-20 14:52:38.060700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:31.199 [2024-11-20 14:52:38.069219] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:31.199 [2024-11-20 14:52:38.069233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:31.199 [2024-11-20 14:52:38.075027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:31.199 [2024-11-20 14:52:38.075041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:31.200 [2024-11-20 14:52:38.084677] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:31.200 [2024-11-20 14:52:38.084691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:31.200 [2024-11-20 14:52:38.090640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:31.200 [2024-11-20 14:52:38.090654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:31.200 [2024-11-20 14:52:38.100203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:31.200 [2024-11-20 14:52:38.100217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:31.200 [2024-11-20 14:52:38.109742] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:31.200 [2024-11-20 14:52:38.109756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:31.200 [2024-11-20 14:52:38.122545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:31.200 [2024-11-20 14:52:38.122559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:32:31.200 [2024-11-20 14:52:38.134681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:31.200 [2024-11-20 14:52:38.134696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:31.982 [2024-11-20 14:52:38.962857]
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:31.982 [2024-11-20 14:52:38.962871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:31.982 [2024-11-20 14:52:38.974478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:31.982 [2024-11-20 14:52:38.974492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:31.982 19391.60 IOPS, 151.50 MiB/s
00:32:31.982 Latency(us)
00:32:31.982 [2024-11-20T13:52:39.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:31.982 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:32:31.982 Nvme1n1 : 5.01 19395.05 151.52 0.00 0.00 6594.36 2321.07 11195.73
00:32:31.982 [2024-11-20T13:52:39.042Z] ===================================================================================================================
00:32:31.982 [2024-11-20T13:52:39.042Z] Total : 19395.05 151.52 0.00 0.00 6594.36 2321.07 11195.73
00:32:31.982 [2024-11-20 14:52:38.981904] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:31.982 [2024-11-20 14:52:38.981918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:31.982 [2024-11-20 14:52:38.989903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:31.982 [2024-11-20 14:52:38.989916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:31.982 [2024-11-20 14:52:38.997902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:31.982 [2024-11-20 14:52:38.997911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:31.982 [2024-11-20 14:52:39.005903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:31.982 [2024-11-20 14:52:39.005914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused:
*ERROR*: Unable to add namespace
00:32:32.253
[2024-11-20 14:52:39.077899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:32.253 [2024-11-20 14:52:39.077906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:32.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4174017) - No such process
00:32:32.253 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4174017
00:32:32.253 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:32.253 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:32.253 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:32.253 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:32.253 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:32:32.253 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:32.253 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:32.253 delay0
00:32:32.253 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:32.253 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:32:32.253 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:32.253 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- #
set +x
00:32:32.253 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:32.254 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:32:32.254 [2024-11-20 14:52:39.190866] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:32:38.832 Initializing NVMe Controllers
00:32:38.832 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:38.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:38.832 Initialization complete. Launching workers.
00:32:38.832 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 169
00:32:38.832 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 456, failed to submit 33
00:32:38.832 success 235, unsuccessful 221, failed 0
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:38.832 14:52:45
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:38.832 rmmod nvme_tcp
00:32:38.832 rmmod nvme_fabrics
00:32:38.832 rmmod nvme_keyring
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 4171462 ']'
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 4171462
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 4171462 ']'
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 4171462
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4171462
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4171462'
00:32:38.832 killing process with pid 4171462
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy --
common/autotest_common.sh@973 -- # kill 4171462
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 4171462
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:38.832 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:40.737 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:40.737
00:32:40.737 real 0m31.494s
00:32:40.737 user 0m42.133s
00:32:40.737 sys 0m10.059s
00:32:40.737 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy --
common/autotest_common.sh@1130 -- # xtrace_disable
00:32:40.737 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:40.737 ************************************
00:32:40.737 END TEST nvmf_zcopy
00:32:40.737 ************************************
00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:40.997 ************************************
00:32:40.997 START TEST nvmf_nmic
00:32:40.997 ************************************
00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:32:40.997 * Looking for test storage...
00:32:40.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:40.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.997 --rc genhtml_branch_coverage=1 00:32:40.997 --rc genhtml_function_coverage=1 00:32:40.997 --rc genhtml_legend=1 00:32:40.997 --rc geninfo_all_blocks=1 00:32:40.997 --rc geninfo_unexecuted_blocks=1 00:32:40.997 00:32:40.997 ' 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:40.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.997 --rc genhtml_branch_coverage=1 00:32:40.997 --rc genhtml_function_coverage=1 00:32:40.997 --rc genhtml_legend=1 00:32:40.997 --rc geninfo_all_blocks=1 00:32:40.997 --rc geninfo_unexecuted_blocks=1 00:32:40.997 00:32:40.997 ' 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:40.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.997 --rc genhtml_branch_coverage=1 00:32:40.997 --rc genhtml_function_coverage=1 00:32:40.997 --rc genhtml_legend=1 00:32:40.997 --rc geninfo_all_blocks=1 00:32:40.997 --rc geninfo_unexecuted_blocks=1 00:32:40.997 00:32:40.997 ' 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:40.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.997 --rc genhtml_branch_coverage=1 00:32:40.997 --rc genhtml_function_coverage=1 00:32:40.997 --rc genhtml_legend=1 00:32:40.997 --rc geninfo_all_blocks=1 00:32:40.997 --rc geninfo_unexecuted_blocks=1 00:32:40.997 00:32:40.997 ' 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:40.997 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:40.998 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.294 14:52:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:46.294 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:46.294 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
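After matching the E810 PCI IDs above, the trace resolves each PCI function to its kernel net interface by globbing its `net/` subdirectory in sysfs and stripping the path prefix. A self-contained sketch of that lookup, using a temp directory as a mock stand-in for `/sys/bus/pci/devices` (device addresses and `cvl_0_*` names are taken from this log; the real helper globs the live sysfs tree):

```shell
# Mock sysfs tree standing in for /sys/bus/pci/devices (paths illustrative;
# device addresses and interface names taken from this log).
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:31:00.0/net/cvl_0_0" "$sysfs/0000:31:00.1/net/cvl_0_1"

# Same pattern as the nvmf/common.sh@411/@427 trace: glob the net/ subdir of
# each PCI function, then strip the leading path to get bare interface names.
net_devs=()
for pci in 0000:31:00.0 0000:31:00.1; do
  pci_net_devs=("$sysfs/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

The `${var##*/}` expansion applied across the array is what turns `/sys/bus/pci/devices/0000:31:00.0/net/cvl_0_0` into the bare `cvl_0_0` that later `ip` commands consume.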
00:32:46.294 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:46.295 Found net devices under 0000:31:00.0: cvl_0_0 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:46.295 Found net devices under 0000:31:00.1: cvl_0_1 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:46.295 14:52:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.295 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:46.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:32:46.295 00:32:46.295 --- 10.0.0.2 ping statistics --- 00:32:46.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.295 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:46.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:32:46.295 00:32:46.295 --- 10.0.0.1 ping statistics --- 00:32:46.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.295 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=4180801 
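The namespace plumbing traced above (`nvmf_tcp_init` in nvmf/common.sh) follows a fixed sequence: create the target namespace, move one NIC into it, address both sides, bring links up, punch a tagged iptables hole, and verify with pings in both directions. A dry-run sketch of that sequence, with names from this log; `run` echoes instead of executing, since the real commands need root (replace its body with `"$@"` to apply them):

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk   # target namespace name, as in the log
TGT_IF=cvl_0_0       # NIC moved into the namespace (target side)
INI_IF=cvl_0_1       # NIC left in the root namespace (initiator side)

run ip netns add "$NS"                                        # create namespace
run ip link set "$TGT_IF" netns "$NS"                         # move target NIC
run ip addr add 10.0.0.1/24 dev "$INI_IF"                     # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF" # target address
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Rule is tagged with an SPDK_NVMF comment so teardown can later do
# iptables-save | grep -v SPDK_NVMF | iptables-restore, removing only its own rules.
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
run ping -c 1 10.0.0.2                                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator
```

The comment tag is the key design choice: it lets the `iptr` teardown seen at the end of the previous test strip SPDK's rules without touching any pre-existing firewall configuration.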
00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 4180801 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 4180801 ']' 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:46.295 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:46.295 [2024-11-20 14:52:53.301144] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:46.295 [2024-11-20 14:52:53.302164] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
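The `waitforlisten` step traced here polls until the launched target is reachable on its RPC socket (`/var/tmp/spdk.sock`), bounded by `max_retries=100`. A simplified standalone sketch of that polling loop; an existence check on a temp path (created by a background job) stands in for actually probing the UNIX socket:

```shell
# Simplified waitforlisten-style poll: wait for a path to appear, with a retry
# cap. The temp file mocks the RPC socket; the real helper probes the socket.
sock=$(mktemp -u)
( sleep 0.3; touch "$sock" ) &   # background "target" creating the socket path

max_retries=100
i=0
until [ -e "$sock" ]; do
  i=$((i + 1))
  if [ "$i" -ge "$max_retries" ]; then
    echo "timed out waiting for $sock"
    break
  fi
  sleep 0.05
done
[ -e "$sock" ] && echo "listening: $sock"
rm -f "$sock"
wait
```

Bounding the loop matters in CI: if the target crashes at startup, the test fails promptly with a clear timeout instead of hanging the whole pipeline.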
00:32:46.295 [2024-11-20 14:52:53.302204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:46.554 [2024-11-20 14:52:53.389531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:46.554 [2024-11-20 14:52:53.443738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:46.554 [2024-11-20 14:52:53.443795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:46.554 [2024-11-20 14:52:53.443808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:46.554 [2024-11-20 14:52:53.443815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:46.554 [2024-11-20 14:52:53.443821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:46.554 [2024-11-20 14:52:53.445924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.554 [2024-11-20 14:52:53.446092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:46.554 [2024-11-20 14:52:53.446277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:46.554 [2024-11-20 14:52:53.446278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.554 [2024-11-20 14:52:53.525301] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:46.554 [2024-11-20 14:52:53.525640] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:46.554 [2024-11-20 14:52:53.526354] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:32:46.554 [2024-11-20 14:52:53.526472] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:46.554 [2024-11-20 14:52:53.526504] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:47.123 [2024-11-20 14:52:54.111392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:47.123 Malloc0 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:47.123 [2024-11-20 14:52:54.171218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.123 14:52:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:47.123 test case1: single bdev can't be used in multiple subsystems 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.123 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:47.383 [2024-11-20 14:52:54.195015] 
bdev.c:8507:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:32:47.383 [2024-11-20 14:52:54.195034] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:47.383 [2024-11-20 14:52:54.195042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.383 request: 00:32:47.383 { 00:32:47.383 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:47.383 "namespace": { 00:32:47.383 "bdev_name": "Malloc0", 00:32:47.383 "no_auto_visible": false, 00:32:47.383 "hide_metadata": false 00:32:47.383 }, 00:32:47.383 "method": "nvmf_subsystem_add_ns", 00:32:47.383 "req_id": 1 00:32:47.383 } 00:32:47.383 Got JSON-RPC error response 00:32:47.383 response: 00:32:47.383 { 00:32:47.383 "code": -32602, 00:32:47.383 "message": "Invalid parameters" 00:32:47.383 } 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:47.383 Adding namespace failed - expected result. 
00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:47.383 test case2: host connect to nvmf target in multiple paths 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:47.383 [2024-11-20 14:52:54.203099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.383 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:47.644 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:47.903 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:47.903 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:32:47.903 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:47.903 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:47.903 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:32:50.485 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:50.485 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:50.485 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:50.485 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:50.485 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:50.485 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:32:50.485 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:50.485 [global] 00:32:50.485 thread=1 00:32:50.485 invalidate=1 00:32:50.485 rw=write 00:32:50.485 time_based=1 00:32:50.485 runtime=1 00:32:50.485 ioengine=libaio 00:32:50.485 direct=1 00:32:50.485 bs=4096 00:32:50.485 iodepth=1 00:32:50.485 norandommap=0 00:32:50.485 numjobs=1 00:32:50.485 00:32:50.485 verify_dump=1 00:32:50.485 verify_backlog=512 00:32:50.485 verify_state_save=0 00:32:50.485 do_verify=1 00:32:50.485 verify=crc32c-intel 00:32:50.485 [job0] 00:32:50.485 filename=/dev/nvme0n1 00:32:50.485 Could not set queue depth (nvme0n1) 00:32:50.485 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:50.485 fio-3.35 00:32:50.485 Starting 1 thread 00:32:51.448 00:32:51.448 job0: (groupid=0, jobs=1): err= 0: pid=4181997: Wed Nov 20 
14:52:58 2024 00:32:51.448 read: IOPS=56, BW=226KiB/s (231kB/s)(228KiB/1011msec) 00:32:51.448 slat (nsec): min=8081, max=27825, avg=22880.95, stdev=6720.65 00:32:51.448 clat (usec): min=318, max=42511, avg=14294.41, stdev=19658.97 00:32:51.448 lat (usec): min=345, max=42522, avg=14317.29, stdev=19659.46 00:32:51.448 clat percentiles (usec): 00:32:51.448 | 1.00th=[ 318], 5.00th=[ 412], 10.00th=[ 445], 20.00th=[ 502], 00:32:51.448 | 30.00th=[ 523], 40.00th=[ 537], 50.00th=[ 562], 60.00th=[ 578], 00:32:51.448 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:51.448 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:32:51.448 | 99.99th=[42730] 00:32:51.448 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:32:51.448 slat (usec): min=4, max=30659, avg=81.06, stdev=1354.08 00:32:51.448 clat (usec): min=94, max=466, avg=292.50, stdev=66.56 00:32:51.448 lat (usec): min=106, max=30893, avg=373.56, stdev=1353.32 00:32:51.448 clat percentiles (usec): 00:32:51.448 | 1.00th=[ 113], 5.00th=[ 186], 10.00th=[ 212], 20.00th=[ 235], 00:32:51.448 | 30.00th=[ 251], 40.00th=[ 281], 50.00th=[ 302], 60.00th=[ 318], 00:32:51.448 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 375], 95.00th=[ 388], 00:32:51.448 | 99.00th=[ 424], 99.50th=[ 445], 99.90th=[ 465], 99.95th=[ 465], 00:32:51.448 | 99.99th=[ 465] 00:32:51.448 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:32:51.448 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:51.448 lat (usec) : 100=0.53%, 250=26.01%, 500=65.20%, 750=4.92% 00:32:51.448 lat (msec) : 50=3.34% 00:32:51.448 cpu : usr=0.59%, sys=1.09%, ctx=573, majf=0, minf=1 00:32:51.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:51.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.448 issued 
rwts: total=57,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:51.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:51.448 00:32:51.448 Run status group 0 (all jobs): 00:32:51.448 READ: bw=226KiB/s (231kB/s), 226KiB/s-226KiB/s (231kB/s-231kB/s), io=228KiB (233kB), run=1011-1011msec 00:32:51.448 WRITE: bw=2026KiB/s (2074kB/s), 2026KiB/s-2026KiB/s (2074kB/s-2074kB/s), io=2048KiB (2097kB), run=1011-1011msec 00:32:51.448 00:32:51.448 Disk stats (read/write): 00:32:51.448 nvme0n1: ios=79/512, merge=0/0, ticks=1653/142, in_queue=1795, util=98.70% 00:32:51.449 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:51.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:51.708 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:51.708 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:32:51.708 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:51.708 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:51.708 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:51.708 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:51.708 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:51.709 14:52:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:51.709 rmmod nvme_tcp 00:32:51.709 rmmod nvme_fabrics 00:32:51.709 rmmod nvme_keyring 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 4180801 ']' 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 4180801 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 4180801 ']' 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 4180801 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4180801 
00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4180801' 00:32:51.709 killing process with pid 4180801 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 4180801 00:32:51.709 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 4180801 00:32:51.969 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:51.969 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:51.969 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:51.969 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:51.969 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:32:51.969 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:51.969 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:32:51.969 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:51.969 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:51.969 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.969 14:52:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.969 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.888 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:53.888 00:32:53.888 real 0m12.991s 00:32:53.888 user 0m30.888s 00:32:53.888 sys 0m5.544s 00:32:53.888 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:53.888 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:53.888 ************************************ 00:32:53.888 END TEST nvmf_nmic 00:32:53.888 ************************************ 00:32:53.888 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:53.888 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:53.888 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:53.888 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:53.888 ************************************ 00:32:53.888 START TEST nvmf_fio_target 00:32:53.888 ************************************ 00:32:53.888 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:53.888 * Looking for test storage... 
00:32:53.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:53.888 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:53.888 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:32:53.888 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:54.148 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:54.149 
14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:54.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.149 --rc genhtml_branch_coverage=1 00:32:54.149 --rc genhtml_function_coverage=1 00:32:54.149 --rc genhtml_legend=1 00:32:54.149 --rc geninfo_all_blocks=1 00:32:54.149 --rc geninfo_unexecuted_blocks=1 00:32:54.149 00:32:54.149 ' 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:54.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.149 --rc genhtml_branch_coverage=1 00:32:54.149 --rc genhtml_function_coverage=1 00:32:54.149 --rc genhtml_legend=1 00:32:54.149 --rc geninfo_all_blocks=1 00:32:54.149 --rc geninfo_unexecuted_blocks=1 00:32:54.149 00:32:54.149 ' 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:54.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.149 --rc genhtml_branch_coverage=1 00:32:54.149 --rc genhtml_function_coverage=1 00:32:54.149 --rc genhtml_legend=1 00:32:54.149 --rc geninfo_all_blocks=1 00:32:54.149 --rc geninfo_unexecuted_blocks=1 00:32:54.149 00:32:54.149 ' 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:54.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.149 --rc genhtml_branch_coverage=1 00:32:54.149 --rc genhtml_function_coverage=1 00:32:54.149 --rc genhtml_legend=1 00:32:54.149 --rc geninfo_all_blocks=1 
00:32:54.149 --rc geninfo_unexecuted_blocks=1 00:32:54.149 00:32:54.149 ' 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:54.149 
14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.149 14:53:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:54.149 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:54.149 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:54.149 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:54.149 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:54.150 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:54.150 
14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:54.150 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:54.150 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:54.150 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:54.150 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:54.150 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:54.150 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:54.150 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:54.150 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:54.150 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.150 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:54.150 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.150 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:54.150 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:54.150 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:54.150 14:53:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:59.427 14:53:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:59.427 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:59.427 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:59.427 
14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:59.427 Found net 
devices under 0000:31:00.0: cvl_0_0 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.427 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:59.428 Found net devices under 0000:31:00.1: cvl_0_1 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:59.428 14:53:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:59.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:59.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:32:59.428 00:32:59.428 --- 10.0.0.2 ping statistics --- 00:32:59.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.428 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:59.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:59.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:32:59.428 00:32:59.428 --- 10.0.0.1 ping statistics --- 00:32:59.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.428 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:59.428 14:53:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=4186621 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 4186621 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 4186621 ']' 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:59.428 14:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:59.428 [2024-11-20 14:53:06.383934] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:59.428 [2024-11-20 14:53:06.384924] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:32:59.428 [2024-11-20 14:53:06.384962] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:59.428 [2024-11-20 14:53:06.471072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:59.687 [2024-11-20 14:53:06.507906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:59.687 [2024-11-20 14:53:06.507939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:59.687 [2024-11-20 14:53:06.507948] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:59.687 [2024-11-20 14:53:06.507954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:59.688 [2024-11-20 14:53:06.507961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:59.688 [2024-11-20 14:53:06.509718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:59.688 [2024-11-20 14:53:06.509869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:59.688 [2024-11-20 14:53:06.510018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.688 [2024-11-20 14:53:06.510019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:59.688 [2024-11-20 14:53:06.566921] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:59.688 [2024-11-20 14:53:06.568096] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:59.688 [2024-11-20 14:53:06.568139] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:59.688 [2024-11-20 14:53:06.568179] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:59.688 [2024-11-20 14:53:06.568192] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:00.256 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:00.256 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:33:00.256 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:00.256 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:00.256 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:00.256 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:00.256 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:00.516 [2024-11-20 14:53:07.338766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.516 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:00.516 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:00.516 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:33:00.776 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:00.776 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:01.036 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:01.036 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:01.036 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:01.036 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:01.295 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:01.554 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:01.554 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:01.554 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:01.554 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:01.814 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:33:01.814 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:01.814 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:02.073 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:02.073 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:02.342 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:02.342 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:02.342 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:02.604 [2024-11-20 14:53:09.522769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:02.604 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:02.864 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:02.864 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:03.123 14:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:03.123 14:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:33:03.123 14:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:03.123 14:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:33:03.383 14:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:33:03.383 14:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:33:05.290 14:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:05.290 14:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:05.290 14:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:05.290 14:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:33:05.290 14:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:05.290 14:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:33:05.290 14:53:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:05.290 [global] 00:33:05.290 thread=1 00:33:05.290 invalidate=1 00:33:05.290 rw=write 00:33:05.290 time_based=1 00:33:05.290 runtime=1 00:33:05.290 ioengine=libaio 00:33:05.290 direct=1 00:33:05.290 bs=4096 00:33:05.290 iodepth=1 00:33:05.290 norandommap=0 00:33:05.290 numjobs=1 00:33:05.290 00:33:05.290 verify_dump=1 00:33:05.290 verify_backlog=512 00:33:05.290 verify_state_save=0 00:33:05.290 do_verify=1 00:33:05.290 verify=crc32c-intel 00:33:05.290 [job0] 00:33:05.290 filename=/dev/nvme0n1 00:33:05.290 [job1] 00:33:05.290 filename=/dev/nvme0n2 00:33:05.290 [job2] 00:33:05.290 filename=/dev/nvme0n3 00:33:05.290 [job3] 00:33:05.290 filename=/dev/nvme0n4 00:33:05.290 Could not set queue depth (nvme0n1) 00:33:05.290 Could not set queue depth (nvme0n2) 00:33:05.290 Could not set queue depth (nvme0n3) 00:33:05.290 Could not set queue depth (nvme0n4) 00:33:05.549 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:05.549 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:05.549 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:05.549 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:05.549 fio-3.35 00:33:05.549 Starting 4 threads 00:33:06.933 00:33:06.933 job0: (groupid=0, jobs=1): err= 0: pid=4188097: Wed Nov 20 14:53:13 2024 00:33:06.933 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:06.933 slat (nsec): min=10883, max=62005, avg=17062.92, stdev=5109.77 00:33:06.933 clat (usec): min=424, max=1232, avg=976.85, stdev=98.23 00:33:06.933 lat (usec): min=437, 
max=1244, avg=993.91, stdev=98.38 00:33:06.933 clat percentiles (usec): 00:33:06.933 | 1.00th=[ 660], 5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 914], 00:33:06.933 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 1004], 00:33:06.933 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:33:06.933 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1237], 99.95th=[ 1237], 00:33:06.933 | 99.99th=[ 1237] 00:33:06.933 write: IOPS=855, BW=3421KiB/s (3503kB/s)(3424KiB/1001msec); 0 zone resets 00:33:06.933 slat (nsec): min=3520, max=53726, avg=15192.99, stdev=8128.52 00:33:06.933 clat (usec): min=181, max=895, avg=550.53, stdev=116.93 00:33:06.933 lat (usec): min=188, max=909, avg=565.73, stdev=118.60 00:33:06.933 clat percentiles (usec): 00:33:06.933 | 1.00th=[ 255], 5.00th=[ 338], 10.00th=[ 400], 20.00th=[ 457], 00:33:06.933 | 30.00th=[ 498], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 586], 00:33:06.933 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 701], 95.00th=[ 734], 00:33:06.933 | 99.00th=[ 783], 99.50th=[ 799], 99.90th=[ 898], 99.95th=[ 898], 00:33:06.933 | 99.99th=[ 898] 00:33:06.933 bw ( KiB/s): min= 4096, max= 4096, per=38.53%, avg=4096.00, stdev= 0.00, samples=1 00:33:06.933 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:06.933 lat (usec) : 250=0.51%, 500=19.44%, 750=41.74%, 1000=22.66% 00:33:06.933 lat (msec) : 2=15.64% 00:33:06.933 cpu : usr=1.50%, sys=3.40%, ctx=1369, majf=0, minf=1 00:33:06.933 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:06.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.933 issued rwts: total=512,856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:06.933 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:06.933 job1: (groupid=0, jobs=1): err= 0: pid=4188104: Wed Nov 20 14:53:13 2024 00:33:06.933 read: IOPS=18, BW=74.2KiB/s 
(76.0kB/s)(76.0KiB/1024msec) 00:33:06.933 slat (nsec): min=25306, max=26141, avg=25643.84, stdev=215.87 00:33:06.933 clat (usec): min=40786, max=41973, avg=41110.83, stdev=374.05 00:33:06.933 lat (usec): min=40812, max=41999, avg=41136.48, stdev=374.08 00:33:06.933 clat percentiles (usec): 00:33:06.933 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:33:06.933 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:06.933 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:33:06.933 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:06.933 | 99.99th=[42206] 00:33:06.933 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:33:06.933 slat (nsec): min=9755, max=51053, avg=28337.51, stdev=9447.21 00:33:06.933 clat (usec): min=210, max=691, avg=437.47, stdev=84.76 00:33:06.933 lat (usec): min=239, max=702, avg=465.81, stdev=88.62 00:33:06.933 clat percentiles (usec): 00:33:06.933 | 1.00th=[ 247], 5.00th=[ 293], 10.00th=[ 318], 20.00th=[ 355], 00:33:06.933 | 30.00th=[ 388], 40.00th=[ 429], 50.00th=[ 457], 60.00th=[ 474], 00:33:06.933 | 70.00th=[ 490], 80.00th=[ 506], 90.00th=[ 537], 95.00th=[ 562], 00:33:06.933 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 693], 99.95th=[ 693], 00:33:06.933 | 99.99th=[ 693] 00:33:06.933 bw ( KiB/s): min= 4096, max= 4096, per=38.53%, avg=4096.00, stdev= 0.00, samples=1 00:33:06.933 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:06.933 lat (usec) : 250=1.13%, 500=72.69%, 750=22.60% 00:33:06.933 lat (msec) : 50=3.58% 00:33:06.933 cpu : usr=0.49%, sys=1.56%, ctx=531, majf=0, minf=2 00:33:06.933 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:06.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.933 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:33:06.933 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:06.933 job2: (groupid=0, jobs=1): err= 0: pid=4188126: Wed Nov 20 14:53:13 2024 00:33:06.933 read: IOPS=17, BW=69.2KiB/s (70.9kB/s)(72.0KiB/1040msec) 00:33:06.933 slat (nsec): min=11064, max=27102, avg=24825.61, stdev=4968.08 00:33:06.933 clat (usec): min=1115, max=42062, avg=39657.26, stdev=9620.01 00:33:06.933 lat (usec): min=1126, max=42089, avg=39682.09, stdev=9623.51 00:33:06.933 clat percentiles (usec): 00:33:06.933 | 1.00th=[ 1123], 5.00th=[ 1123], 10.00th=[41157], 20.00th=[41681], 00:33:06.933 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:33:06.933 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:06.933 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:06.933 | 99.99th=[42206] 00:33:06.933 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:33:06.933 slat (nsec): min=4045, max=29870, avg=12670.72, stdev=4420.25 00:33:06.933 clat (usec): min=232, max=1025, avg=616.92, stdev=137.81 00:33:06.933 lat (usec): min=236, max=1036, avg=629.59, stdev=138.69 00:33:06.933 clat percentiles (usec): 00:33:06.933 | 1.00th=[ 306], 5.00th=[ 375], 10.00th=[ 441], 20.00th=[ 494], 00:33:06.933 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 660], 00:33:06.933 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 791], 95.00th=[ 832], 00:33:06.933 | 99.00th=[ 938], 99.50th=[ 979], 99.90th=[ 1029], 99.95th=[ 1029], 00:33:06.933 | 99.99th=[ 1029] 00:33:06.933 bw ( KiB/s): min= 4096, max= 4096, per=38.53%, avg=4096.00, stdev= 0.00, samples=1 00:33:06.933 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:06.933 lat (usec) : 250=0.19%, 500=21.32%, 750=58.30%, 1000=16.60% 00:33:06.933 lat (msec) : 2=0.38%, 50=3.21% 00:33:06.933 cpu : usr=0.00%, sys=0.87%, ctx=531, majf=0, minf=1 00:33:06.933 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:06.933 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.933 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:06.933 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:06.933 job3: (groupid=0, jobs=1): err= 0: pid=4188135: Wed Nov 20 14:53:13 2024 00:33:06.933 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:06.933 slat (nsec): min=3547, max=46459, avg=16471.88, stdev=4539.61 00:33:06.933 clat (usec): min=198, max=1172, avg=934.00, stdev=85.52 00:33:06.933 lat (usec): min=201, max=1187, avg=950.47, stdev=85.86 00:33:06.933 clat percentiles (usec): 00:33:06.933 | 1.00th=[ 717], 5.00th=[ 791], 10.00th=[ 840], 20.00th=[ 881], 00:33:06.933 | 30.00th=[ 906], 40.00th=[ 922], 50.00th=[ 947], 60.00th=[ 963], 00:33:06.933 | 70.00th=[ 979], 80.00th=[ 996], 90.00th=[ 1020], 95.00th=[ 1057], 00:33:06.933 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1172], 99.95th=[ 1172], 00:33:06.933 | 99.99th=[ 1172] 00:33:06.933 write: IOPS=883, BW=3532KiB/s (3617kB/s)(3536KiB/1001msec); 0 zone resets 00:33:06.933 slat (nsec): min=3615, max=54795, avg=15905.98, stdev=8700.86 00:33:06.933 clat (usec): min=199, max=912, avg=556.97, stdev=112.83 00:33:06.933 lat (usec): min=214, max=928, avg=572.87, stdev=115.51 00:33:06.933 clat percentiles (usec): 00:33:06.933 | 1.00th=[ 297], 5.00th=[ 338], 10.00th=[ 408], 20.00th=[ 465], 00:33:06.933 | 30.00th=[ 506], 40.00th=[ 537], 50.00th=[ 562], 60.00th=[ 586], 00:33:06.933 | 70.00th=[ 619], 80.00th=[ 652], 90.00th=[ 693], 95.00th=[ 742], 00:33:06.933 | 99.00th=[ 799], 99.50th=[ 832], 99.90th=[ 914], 99.95th=[ 914], 00:33:06.933 | 99.99th=[ 914] 00:33:06.933 bw ( KiB/s): min= 4096, max= 4096, per=38.53%, avg=4096.00, stdev= 0.00, samples=1 00:33:06.933 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:06.933 lat (usec) : 250=0.14%, 500=18.12%, 750=43.84%, 1000=31.59% 00:33:06.933 lat 
(msec) : 2=6.30% 00:33:06.933 cpu : usr=2.00%, sys=3.00%, ctx=1399, majf=0, minf=1 00:33:06.933 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:06.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.933 issued rwts: total=512,884,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:06.933 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:06.933 00:33:06.933 Run status group 0 (all jobs): 00:33:06.933 READ: bw=4081KiB/s (4179kB/s), 69.2KiB/s-2046KiB/s (70.9kB/s-2095kB/s), io=4244KiB (4346kB), run=1001-1040msec 00:33:06.933 WRITE: bw=10.4MiB/s (10.9MB/s), 1969KiB/s-3532KiB/s (2016kB/s-3617kB/s), io=10.8MiB (11.3MB), run=1001-1040msec 00:33:06.933 00:33:06.933 Disk stats (read/write): 00:33:06.934 nvme0n1: ios=535/554, merge=0/0, ticks=1329/239, in_queue=1568, util=83.87% 00:33:06.934 nvme0n2: ios=64/512, merge=0/0, ticks=667/219, in_queue=886, util=90.50% 00:33:06.934 nvme0n3: ios=35/512, merge=0/0, ticks=1385/315, in_queue=1700, util=91.96% 00:33:06.934 nvme0n4: ios=534/588, merge=0/0, ticks=1313/262, in_queue=1575, util=94.11% 00:33:06.934 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:33:06.934 [global] 00:33:06.934 thread=1 00:33:06.934 invalidate=1 00:33:06.934 rw=randwrite 00:33:06.934 time_based=1 00:33:06.934 runtime=1 00:33:06.934 ioengine=libaio 00:33:06.934 direct=1 00:33:06.934 bs=4096 00:33:06.934 iodepth=1 00:33:06.934 norandommap=0 00:33:06.934 numjobs=1 00:33:06.934 00:33:06.934 verify_dump=1 00:33:06.934 verify_backlog=512 00:33:06.934 verify_state_save=0 00:33:06.934 do_verify=1 00:33:06.934 verify=crc32c-intel 00:33:06.934 [job0] 00:33:06.934 filename=/dev/nvme0n1 00:33:06.934 [job1] 00:33:06.934 filename=/dev/nvme0n2 00:33:06.934 [job2] 
00:33:06.934 filename=/dev/nvme0n3 00:33:06.934 [job3] 00:33:06.934 filename=/dev/nvme0n4 00:33:06.934 Could not set queue depth (nvme0n1) 00:33:06.934 Could not set queue depth (nvme0n2) 00:33:06.934 Could not set queue depth (nvme0n3) 00:33:06.934 Could not set queue depth (nvme0n4) 00:33:07.193 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:07.193 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:07.193 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:07.193 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:07.193 fio-3.35 00:33:07.193 Starting 4 threads 00:33:08.593 00:33:08.593 job0: (groupid=0, jobs=1): err= 0: pid=4188624: Wed Nov 20 14:53:15 2024 00:33:08.593 read: IOPS=18, BW=75.9KiB/s (77.7kB/s)(76.0KiB/1001msec) 00:33:08.593 slat (nsec): min=10955, max=25976, avg=24940.89, stdev=3389.75 00:33:08.593 clat (usec): min=40735, max=42036, avg=41062.69, stdev=320.06 00:33:08.593 lat (usec): min=40746, max=42061, avg=41087.63, stdev=320.90 00:33:08.593 clat percentiles (usec): 00:33:08.593 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:33:08.593 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:08.593 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:33:08.593 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:08.593 | 99.99th=[42206] 00:33:08.593 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:33:08.593 slat (nsec): min=3936, max=70347, avg=12128.51, stdev=6780.71 00:33:08.593 clat (usec): min=114, max=637, avg=414.86, stdev=71.38 00:33:08.593 lat (usec): min=119, max=672, avg=426.99, stdev=74.05 00:33:08.593 clat percentiles (usec): 00:33:08.593 | 1.00th=[ 260], 5.00th=[ 302], 
10.00th=[ 322], 20.00th=[ 343], 00:33:08.593 | 30.00th=[ 379], 40.00th=[ 408], 50.00th=[ 424], 60.00th=[ 441], 00:33:08.593 | 70.00th=[ 453], 80.00th=[ 474], 90.00th=[ 498], 95.00th=[ 529], 00:33:08.593 | 99.00th=[ 578], 99.50th=[ 594], 99.90th=[ 635], 99.95th=[ 635], 00:33:08.593 | 99.99th=[ 635] 00:33:08.593 bw ( KiB/s): min= 4096, max= 4096, per=42.01%, avg=4096.00, stdev= 0.00, samples=1 00:33:08.593 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:08.593 lat (usec) : 250=0.94%, 500=86.44%, 750=9.04% 00:33:08.593 lat (msec) : 50=3.58% 00:33:08.593 cpu : usr=0.30%, sys=0.60%, ctx=532, majf=0, minf=1 00:33:08.593 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:08.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.593 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.593 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:08.593 job1: (groupid=0, jobs=1): err= 0: pid=4188632: Wed Nov 20 14:53:15 2024 00:33:08.593 read: IOPS=17, BW=70.9KiB/s (72.6kB/s)(72.0KiB/1015msec) 00:33:08.593 slat (nsec): min=3670, max=26488, avg=24086.33, stdev=6183.49 00:33:08.593 clat (usec): min=953, max=42058, avg=39666.34, stdev=9661.72 00:33:08.593 lat (usec): min=965, max=42084, avg=39690.43, stdev=9664.95 00:33:08.594 clat percentiles (usec): 00:33:08.594 | 1.00th=[ 955], 5.00th=[ 955], 10.00th=[41681], 20.00th=[41681], 00:33:08.594 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:33:08.594 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:08.594 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:08.594 | 99.99th=[42206] 00:33:08.594 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:33:08.594 slat (nsec): min=4070, max=51838, avg=15021.24, stdev=5868.30 00:33:08.594 clat (usec): 
min=179, max=1182, avg=567.31, stdev=156.85 00:33:08.594 lat (usec): min=183, max=1196, avg=582.33, stdev=157.65 00:33:08.594 clat percentiles (usec): 00:33:08.594 | 1.00th=[ 247], 5.00th=[ 318], 10.00th=[ 363], 20.00th=[ 437], 00:33:08.594 | 30.00th=[ 486], 40.00th=[ 529], 50.00th=[ 562], 60.00th=[ 594], 00:33:08.594 | 70.00th=[ 644], 80.00th=[ 701], 90.00th=[ 766], 95.00th=[ 816], 00:33:08.594 | 99.00th=[ 971], 99.50th=[ 1057], 99.90th=[ 1188], 99.95th=[ 1188], 00:33:08.594 | 99.99th=[ 1188] 00:33:08.594 bw ( KiB/s): min= 4096, max= 4096, per=42.01%, avg=4096.00, stdev= 0.00, samples=1 00:33:08.594 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:08.594 lat (usec) : 250=1.13%, 500=32.26%, 750=51.13%, 1000=11.32% 00:33:08.594 lat (msec) : 2=0.94%, 50=3.21% 00:33:08.594 cpu : usr=0.20%, sys=0.89%, ctx=531, majf=0, minf=1 00:33:08.594 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:08.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.594 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.594 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:08.594 job2: (groupid=0, jobs=1): err= 0: pid=4188653: Wed Nov 20 14:53:15 2024 00:33:08.594 read: IOPS=22, BW=89.5KiB/s (91.6kB/s)(92.0KiB/1028msec) 00:33:08.594 slat (nsec): min=11631, max=14554, avg=12637.52, stdev=1008.07 00:33:08.594 clat (usec): min=367, max=41163, avg=35715.24, stdev=13924.51 00:33:08.594 lat (usec): min=381, max=41175, avg=35727.88, stdev=13924.12 00:33:08.594 clat percentiles (usec): 00:33:08.594 | 1.00th=[ 367], 5.00th=[ 603], 10.00th=[ 693], 20.00th=[41157], 00:33:08.594 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:08.594 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:08.594 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:33:08.594 | 99.99th=[41157] 00:33:08.594 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:33:08.594 slat (usec): min=3, max=19042, avg=51.49, stdev=840.97 00:33:08.594 clat (usec): min=122, max=669, avg=346.57, stdev=81.03 00:33:08.594 lat (usec): min=136, max=19291, avg=398.07, stdev=840.58 00:33:08.594 clat percentiles (usec): 00:33:08.594 | 1.00th=[ 174], 5.00th=[ 225], 10.00th=[ 255], 20.00th=[ 269], 00:33:08.594 | 30.00th=[ 293], 40.00th=[ 322], 50.00th=[ 355], 60.00th=[ 375], 00:33:08.594 | 70.00th=[ 396], 80.00th=[ 412], 90.00th=[ 433], 95.00th=[ 469], 00:33:08.594 | 99.00th=[ 578], 99.50th=[ 611], 99.90th=[ 668], 99.95th=[ 668], 00:33:08.594 | 99.99th=[ 668] 00:33:08.594 bw ( KiB/s): min= 4096, max= 4096, per=42.01%, avg=4096.00, stdev= 0.00, samples=1 00:33:08.594 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:08.594 lat (usec) : 250=7.29%, 500=85.05%, 750=3.93% 00:33:08.594 lat (msec) : 50=3.74% 00:33:08.594 cpu : usr=0.88%, sys=0.88%, ctx=537, majf=0, minf=1 00:33:08.594 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:08.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.594 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.594 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:08.594 job3: (groupid=0, jobs=1): err= 0: pid=4188661: Wed Nov 20 14:53:15 2024 00:33:08.594 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:08.594 slat (nsec): min=4342, max=36709, avg=15137.48, stdev=3200.25 00:33:08.594 clat (usec): min=599, max=1254, avg=936.71, stdev=95.08 00:33:08.594 lat (usec): min=611, max=1270, avg=951.84, stdev=94.95 00:33:08.594 clat percentiles (usec): 00:33:08.594 | 1.00th=[ 652], 5.00th=[ 750], 10.00th=[ 807], 20.00th=[ 881], 00:33:08.594 | 30.00th=[ 914], 40.00th=[ 938], 50.00th=[ 
955], 60.00th=[ 963], 00:33:08.594 | 70.00th=[ 979], 80.00th=[ 996], 90.00th=[ 1037], 95.00th=[ 1074], 00:33:08.594 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 1254], 99.95th=[ 1254], 00:33:08.594 | 99.99th=[ 1254] 00:33:08.594 write: IOPS=969, BW=3876KiB/s (3969kB/s)(3880KiB/1001msec); 0 zone resets 00:33:08.594 slat (nsec): min=3898, max=51165, avg=13273.43, stdev=5010.05 00:33:08.594 clat (usec): min=180, max=1139, avg=509.27, stdev=137.26 00:33:08.594 lat (usec): min=185, max=1153, avg=522.54, stdev=138.43 00:33:08.594 clat percentiles (usec): 00:33:08.594 | 1.00th=[ 229], 5.00th=[ 302], 10.00th=[ 343], 20.00th=[ 408], 00:33:08.594 | 30.00th=[ 429], 40.00th=[ 465], 50.00th=[ 502], 60.00th=[ 537], 00:33:08.594 | 70.00th=[ 570], 80.00th=[ 611], 90.00th=[ 685], 95.00th=[ 758], 00:33:08.594 | 99.00th=[ 873], 99.50th=[ 988], 99.90th=[ 1139], 99.95th=[ 1139], 00:33:08.594 | 99.99th=[ 1139] 00:33:08.594 bw ( KiB/s): min= 4096, max= 4096, per=42.01%, avg=4096.00, stdev= 0.00, samples=1 00:33:08.594 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:08.594 lat (usec) : 250=1.01%, 500=31.04%, 750=31.44%, 1000=29.76% 00:33:08.594 lat (msec) : 2=6.75% 00:33:08.594 cpu : usr=0.90%, sys=2.10%, ctx=1483, majf=0, minf=1 00:33:08.594 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:08.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.594 issued rwts: total=512,970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.594 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:08.594 00:33:08.594 Run status group 0 (all jobs): 00:33:08.594 READ: bw=2226KiB/s (2279kB/s), 70.9KiB/s-2046KiB/s (72.6kB/s-2095kB/s), io=2288KiB (2343kB), run=1001-1028msec 00:33:08.594 WRITE: bw=9751KiB/s (9985kB/s), 1992KiB/s-3876KiB/s (2040kB/s-3969kB/s), io=9.79MiB (10.3MB), run=1001-1028msec 00:33:08.594 00:33:08.594 Disk stats 
(read/write): 00:33:08.594 nvme0n1: ios=65/512, merge=0/0, ticks=636/206, in_queue=842, util=85.67% 00:33:08.594 nvme0n2: ios=64/512, merge=0/0, ticks=1369/272, in_queue=1641, util=88.67% 00:33:08.594 nvme0n3: ios=73/512, merge=0/0, ticks=981/137, in_queue=1118, util=93.35% 00:33:08.594 nvme0n4: ios=571/635, merge=0/0, ticks=684/321, in_queue=1005, util=94.33% 00:33:08.594 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:08.594 [global] 00:33:08.594 thread=1 00:33:08.594 invalidate=1 00:33:08.594 rw=write 00:33:08.594 time_based=1 00:33:08.594 runtime=1 00:33:08.594 ioengine=libaio 00:33:08.594 direct=1 00:33:08.594 bs=4096 00:33:08.594 iodepth=128 00:33:08.594 norandommap=0 00:33:08.594 numjobs=1 00:33:08.594 00:33:08.594 verify_dump=1 00:33:08.594 verify_backlog=512 00:33:08.594 verify_state_save=0 00:33:08.594 do_verify=1 00:33:08.594 verify=crc32c-intel 00:33:08.594 [job0] 00:33:08.594 filename=/dev/nvme0n1 00:33:08.594 [job1] 00:33:08.594 filename=/dev/nvme0n2 00:33:08.594 [job2] 00:33:08.594 filename=/dev/nvme0n3 00:33:08.594 [job3] 00:33:08.594 filename=/dev/nvme0n4 00:33:08.594 Could not set queue depth (nvme0n1) 00:33:08.594 Could not set queue depth (nvme0n2) 00:33:08.594 Could not set queue depth (nvme0n3) 00:33:08.594 Could not set queue depth (nvme0n4) 00:33:08.854 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:08.854 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:08.854 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:08.854 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:08.854 fio-3.35 00:33:08.854 Starting 4 threads 00:33:10.233 00:33:10.233 
job0: (groupid=0, jobs=1): err= 0: pid=4189132: Wed Nov 20 14:53:16 2024 00:33:10.233 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:33:10.233 slat (nsec): min=925, max=10108k, avg=81549.25, stdev=538946.04 00:33:10.233 clat (usec): min=1330, max=34691, avg=10570.99, stdev=4713.63 00:33:10.233 lat (usec): min=1343, max=34699, avg=10652.54, stdev=4754.60 00:33:10.233 clat percentiles (usec): 00:33:10.233 | 1.00th=[ 2409], 5.00th=[ 3785], 10.00th=[ 4948], 20.00th=[ 6587], 00:33:10.234 | 30.00th=[ 7570], 40.00th=[ 8717], 50.00th=[ 9896], 60.00th=[10945], 00:33:10.234 | 70.00th=[12780], 80.00th=[15401], 90.00th=[17171], 95.00th=[18482], 00:33:10.234 | 99.00th=[23725], 99.50th=[27657], 99.90th=[34866], 99.95th=[34866], 00:33:10.234 | 99.99th=[34866] 00:33:10.234 write: IOPS=6122, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:33:10.234 slat (nsec): min=1608, max=18244k, avg=72980.71, stdev=510452.02 00:33:10.234 clat (usec): min=694, max=48315, avg=10128.62, stdev=7007.82 00:33:10.234 lat (usec): min=704, max=48321, avg=10201.60, stdev=7055.86 00:33:10.234 clat percentiles (usec): 00:33:10.234 | 1.00th=[ 2343], 5.00th=[ 3458], 10.00th=[ 4883], 20.00th=[ 5604], 00:33:10.234 | 30.00th=[ 6652], 40.00th=[ 7635], 50.00th=[ 8160], 60.00th=[ 9241], 00:33:10.234 | 70.00th=[ 9634], 80.00th=[12125], 90.00th=[19530], 95.00th=[25297], 00:33:10.234 | 99.00th=[38011], 99.50th=[43779], 99.90th=[48497], 99.95th=[48497], 00:33:10.234 | 99.99th=[48497] 00:33:10.234 bw ( KiB/s): min=24576, max=24576, per=29.12%, avg=24576.00, stdev= 0.00, samples=2 00:33:10.234 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:33:10.234 lat (usec) : 750=0.05%, 1000=0.13% 00:33:10.234 lat (msec) : 2=0.24%, 4=6.05%, 10=56.45%, 20=31.48%, 50=5.61% 00:33:10.234 cpu : usr=4.09%, sys=5.78%, ctx=554, majf=0, minf=1 00:33:10.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:10.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:10.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:10.234 issued rwts: total=6144,6147,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.234 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:10.234 job1: (groupid=0, jobs=1): err= 0: pid=4189144: Wed Nov 20 14:53:16 2024 00:33:10.234 read: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec) 00:33:10.234 slat (nsec): min=919, max=10860k, avg=61629.60, stdev=427983.04 00:33:10.234 clat (usec): min=1829, max=26392, avg=8131.90, stdev=3571.91 00:33:10.234 lat (usec): min=1835, max=33175, avg=8193.53, stdev=3602.47 00:33:10.234 clat percentiles (usec): 00:33:10.234 | 1.00th=[ 2737], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5735], 00:33:10.234 | 30.00th=[ 6325], 40.00th=[ 6718], 50.00th=[ 7177], 60.00th=[ 7701], 00:33:10.234 | 70.00th=[ 8455], 80.00th=[10028], 90.00th=[14091], 95.00th=[15664], 00:33:10.234 | 99.00th=[19792], 99.50th=[21890], 99.90th=[23725], 99.95th=[23725], 00:33:10.234 | 99.99th=[26346] 00:33:10.234 write: IOPS=7653, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1003msec); 0 zone resets 00:33:10.234 slat (nsec): min=1614, max=20248k, avg=66906.92, stdev=458948.21 00:33:10.234 clat (usec): min=739, max=61539, avg=8968.30, stdev=8490.16 00:33:10.234 lat (usec): min=752, max=61985, avg=9035.21, stdev=8541.36 00:33:10.234 clat percentiles (usec): 00:33:10.234 | 1.00th=[ 2089], 5.00th=[ 4047], 10.00th=[ 4424], 20.00th=[ 5211], 00:33:10.234 | 30.00th=[ 5604], 40.00th=[ 5997], 50.00th=[ 6456], 60.00th=[ 6783], 00:33:10.234 | 70.00th=[ 7439], 80.00th=[ 8717], 90.00th=[16909], 95.00th=[29492], 00:33:10.234 | 99.00th=[47973], 99.50th=[53216], 99.90th=[61080], 99.95th=[61604], 00:33:10.234 | 99.99th=[61604] 00:33:10.234 bw ( KiB/s): min=26176, max=34216, per=35.78%, avg=30196.00, stdev=5685.14, samples=2 00:33:10.234 iops : min= 6544, max= 8554, avg=7549.00, stdev=1421.28, samples=2 00:33:10.234 lat (usec) : 750=0.02%, 1000=0.07% 00:33:10.234 lat (msec) : 
2=0.49%, 4=3.92%, 10=78.23%, 20=12.60%, 50=4.38% 00:33:10.234 lat (msec) : 100=0.30% 00:33:10.234 cpu : usr=4.29%, sys=6.49%, ctx=782, majf=0, minf=1 00:33:10.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:33:10.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:10.234 issued rwts: total=7168,7676,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.234 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:10.234 job2: (groupid=0, jobs=1): err= 0: pid=4189160: Wed Nov 20 14:53:16 2024 00:33:10.234 read: IOPS=3872, BW=15.1MiB/s (15.9MB/s)(15.8MiB/1045msec) 00:33:10.234 slat (nsec): min=921, max=13726k, avg=140104.73, stdev=901339.54 00:33:10.234 clat (usec): min=4454, max=59078, avg=19488.65, stdev=12237.50 00:33:10.234 lat (usec): min=4460, max=64701, avg=19628.75, stdev=12279.95 00:33:10.234 clat percentiles (usec): 00:33:10.234 | 1.00th=[ 5473], 5.00th=[ 6259], 10.00th=[ 7308], 20.00th=[ 8717], 00:33:10.234 | 30.00th=[ 9896], 40.00th=[12780], 50.00th=[16188], 60.00th=[20317], 00:33:10.234 | 70.00th=[24249], 80.00th=[28967], 90.00th=[36439], 95.00th=[44303], 00:33:10.234 | 99.00th=[58459], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:33:10.234 | 99.99th=[58983] 00:33:10.234 write: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1045msec); 0 zone resets 00:33:10.234 slat (nsec): min=1575, max=8973.3k, avg=96450.55, stdev=586855.55 00:33:10.234 clat (usec): min=865, max=27418, avg=13104.67, stdev=4844.07 00:33:10.234 lat (usec): min=874, max=27425, avg=13201.12, stdev=4870.36 00:33:10.234 clat percentiles (usec): 00:33:10.234 | 1.00th=[ 4146], 5.00th=[ 5276], 10.00th=[ 5669], 20.00th=[ 8586], 00:33:10.234 | 30.00th=[10552], 40.00th=[12256], 50.00th=[13173], 60.00th=[14222], 00:33:10.234 | 70.00th=[15533], 80.00th=[17433], 90.00th=[19530], 95.00th=[20055], 00:33:10.234 | 99.00th=[24511], 99.50th=[27395], 
99.90th=[27395], 99.95th=[27395], 00:33:10.234 | 99.99th=[27395] 00:33:10.234 bw ( KiB/s): min=12288, max=20480, per=19.41%, avg=16384.00, stdev=5792.62, samples=2 00:33:10.234 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:33:10.234 lat (usec) : 1000=0.11% 00:33:10.234 lat (msec) : 2=0.10%, 10=29.24%, 20=47.37%, 50=21.85%, 100=1.34% 00:33:10.234 cpu : usr=2.01%, sys=3.54%, ctx=361, majf=0, minf=1 00:33:10.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:10.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:10.234 issued rwts: total=4047,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.234 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:10.234 job3: (groupid=0, jobs=1): err= 0: pid=4189169: Wed Nov 20 14:53:16 2024 00:33:10.234 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:33:10.234 slat (nsec): min=1165, max=9703.8k, avg=117098.29, stdev=759154.01 00:33:10.234 clat (usec): min=3641, max=27823, avg=14749.94, stdev=4195.97 00:33:10.234 lat (usec): min=3645, max=31823, avg=14867.04, stdev=4255.55 00:33:10.234 clat percentiles (usec): 00:33:10.234 | 1.00th=[ 7373], 5.00th=[ 7570], 10.00th=[ 9110], 20.00th=[11338], 00:33:10.234 | 30.00th=[12256], 40.00th=[13304], 50.00th=[14615], 60.00th=[16057], 00:33:10.234 | 70.00th=[17433], 80.00th=[18744], 90.00th=[20317], 95.00th=[20841], 00:33:10.234 | 99.00th=[23987], 99.50th=[26084], 99.90th=[27919], 99.95th=[27919], 00:33:10.234 | 99.99th=[27919] 00:33:10.234 write: IOPS=4113, BW=16.1MiB/s (16.8MB/s)(16.1MiB/1004msec); 0 zone resets 00:33:10.234 slat (nsec): min=1635, max=17975k, avg=119573.77, stdev=709015.44 00:33:10.234 clat (usec): min=1231, max=39962, avg=16219.86, stdev=8555.47 00:33:10.234 lat (usec): min=1242, max=39971, avg=16339.43, stdev=8629.97 00:33:10.234 clat percentiles (usec): 00:33:10.234 | 1.00th=[ 
4178], 5.00th=[ 5932], 10.00th=[ 7504], 20.00th=[ 9241], 00:33:10.234 | 30.00th=[11469], 40.00th=[12256], 50.00th=[13698], 60.00th=[15139], 00:33:10.234 | 70.00th=[17957], 80.00th=[21890], 90.00th=[30540], 95.00th=[34866], 00:33:10.234 | 99.00th=[39060], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:33:10.234 | 99.99th=[40109] 00:33:10.234 bw ( KiB/s): min=16384, max=16384, per=19.41%, avg=16384.00, stdev= 0.00, samples=2 00:33:10.234 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:33:10.234 lat (msec) : 2=0.02%, 4=0.58%, 10=17.36%, 20=64.55%, 50=17.48% 00:33:10.234 cpu : usr=3.29%, sys=3.09%, ctx=325, majf=0, minf=2 00:33:10.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:10.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:10.234 issued rwts: total=4096,4130,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.234 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:10.234 00:33:10.234 Run status group 0 (all jobs): 00:33:10.234 READ: bw=80.2MiB/s (84.1MB/s), 15.1MiB/s-27.9MiB/s (15.9MB/s-29.3MB/s), io=83.8MiB (87.9MB), run=1003-1045msec 00:33:10.234 WRITE: bw=82.4MiB/s (86.4MB/s), 15.3MiB/s-29.9MiB/s (16.1MB/s-31.3MB/s), io=86.1MiB (90.3MB), run=1003-1045msec 00:33:10.234 00:33:10.235 Disk stats (read/write): 00:33:10.235 nvme0n1: ios=5144/5155, merge=0/0, ticks=26111/29808, in_queue=55919, util=99.80% 00:33:10.235 nvme0n2: ios=5655/5994, merge=0/0, ticks=22998/29228, in_queue=52226, util=98.47% 00:33:10.235 nvme0n3: ios=3121/3106, merge=0/0, ticks=19233/13777, in_queue=33010, util=98.31% 00:33:10.235 nvme0n4: ios=3341/3584, merge=0/0, ticks=23442/30971, in_queue=54413, util=95.62% 00:33:10.235 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t 
randwrite -r 1 -v 00:33:10.235 [global] 00:33:10.235 thread=1 00:33:10.235 invalidate=1 00:33:10.235 rw=randwrite 00:33:10.235 time_based=1 00:33:10.235 runtime=1 00:33:10.235 ioengine=libaio 00:33:10.235 direct=1 00:33:10.235 bs=4096 00:33:10.235 iodepth=128 00:33:10.235 norandommap=0 00:33:10.235 numjobs=1 00:33:10.235 00:33:10.235 verify_dump=1 00:33:10.235 verify_backlog=512 00:33:10.235 verify_state_save=0 00:33:10.235 do_verify=1 00:33:10.235 verify=crc32c-intel 00:33:10.235 [job0] 00:33:10.235 filename=/dev/nvme0n1 00:33:10.235 [job1] 00:33:10.235 filename=/dev/nvme0n2 00:33:10.235 [job2] 00:33:10.235 filename=/dev/nvme0n3 00:33:10.235 [job3] 00:33:10.235 filename=/dev/nvme0n4 00:33:10.235 Could not set queue depth (nvme0n1) 00:33:10.235 Could not set queue depth (nvme0n2) 00:33:10.235 Could not set queue depth (nvme0n3) 00:33:10.235 Could not set queue depth (nvme0n4) 00:33:10.495 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:10.495 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:10.495 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:10.495 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:10.495 fio-3.35 00:33:10.495 Starting 4 threads 00:33:11.875 00:33:11.875 job0: (groupid=0, jobs=1): err= 0: pid=4189656: Wed Nov 20 14:53:18 2024 00:33:11.875 read: IOPS=5544, BW=21.7MiB/s (22.7MB/s)(21.8MiB/1006msec) 00:33:11.875 slat (nsec): min=893, max=15110k, avg=88048.18, stdev=695945.40 00:33:11.875 clat (usec): min=1172, max=32084, avg=10664.52, stdev=4982.75 00:33:11.875 lat (usec): min=2798, max=32087, avg=10752.57, stdev=5030.30 00:33:11.875 clat percentiles (usec): 00:33:11.875 | 1.00th=[ 3359], 5.00th=[ 5211], 10.00th=[ 6325], 20.00th=[ 6980], 00:33:11.875 | 30.00th=[ 7439], 40.00th=[ 8160], 
50.00th=[ 9372], 60.00th=[10290], 00:33:11.875 | 70.00th=[11863], 80.00th=[14353], 90.00th=[16909], 95.00th=[21365], 00:33:11.875 | 99.00th=[27132], 99.50th=[30016], 99.90th=[31327], 99.95th=[32113], 00:33:11.875 | 99.99th=[32113] 00:33:11.875 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:33:11.875 slat (nsec): min=1537, max=13313k, avg=83578.25, stdev=517633.23 00:33:11.875 clat (usec): min=1372, max=33213, avg=12089.74, stdev=5452.16 00:33:11.875 lat (usec): min=1389, max=33216, avg=12173.31, stdev=5487.07 00:33:11.875 clat percentiles (usec): 00:33:11.875 | 1.00th=[ 2245], 5.00th=[ 4228], 10.00th=[ 5604], 20.00th=[ 7111], 00:33:11.875 | 30.00th=[ 9110], 40.00th=[11076], 50.00th=[12780], 60.00th=[13435], 00:33:11.875 | 70.00th=[13829], 80.00th=[14746], 90.00th=[18482], 95.00th=[22938], 00:33:11.875 | 99.00th=[29754], 99.50th=[32113], 99.90th=[33162], 99.95th=[33162], 00:33:11.875 | 99.99th=[33162] 00:33:11.875 bw ( KiB/s): min=22280, max=22776, per=22.93%, avg=22528.00, stdev=350.72, samples=2 00:33:11.875 iops : min= 5570, max= 5694, avg=5632.00, stdev=87.68, samples=2 00:33:11.875 lat (msec) : 2=0.45%, 4=2.63%, 10=42.66%, 20=46.91%, 50=7.34% 00:33:11.875 cpu : usr=1.99%, sys=2.89%, ctx=555, majf=0, minf=1 00:33:11.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:33:11.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:11.875 issued rwts: total=5578,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.875 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:11.875 job1: (groupid=0, jobs=1): err= 0: pid=4189669: Wed Nov 20 14:53:18 2024 00:33:11.875 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:33:11.875 slat (nsec): min=935, max=14498k, avg=82463.68, stdev=618811.46 00:33:11.875 clat (usec): min=2574, max=44209, avg=10548.90, stdev=6237.56 00:33:11.875 lat 
(usec): min=2579, max=44235, avg=10631.37, stdev=6297.76 00:33:11.875 clat percentiles (usec): 00:33:11.875 | 1.00th=[ 4228], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 7046], 00:33:11.875 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7767], 60.00th=[ 8586], 00:33:11.875 | 70.00th=[ 9634], 80.00th=[12649], 90.00th=[20579], 95.00th=[26870], 00:33:11.875 | 99.00th=[31327], 99.50th=[32637], 99.90th=[34341], 99.95th=[38011], 00:33:11.875 | 99.99th=[44303] 00:33:11.875 write: IOPS=6127, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:33:11.875 slat (nsec): min=1597, max=7798.0k, avg=75858.60, stdev=401681.83 00:33:11.875 clat (usec): min=1319, max=37104, avg=10107.53, stdev=5472.12 00:33:11.875 lat (usec): min=1817, max=37108, avg=10183.39, stdev=5504.22 00:33:11.875 clat percentiles (usec): 00:33:11.875 | 1.00th=[ 2868], 5.00th=[ 4621], 10.00th=[ 6063], 20.00th=[ 6915], 00:33:11.875 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7635], 60.00th=[ 9110], 00:33:11.875 | 70.00th=[12256], 80.00th=[13698], 90.00th=[14877], 95.00th=[21365], 00:33:11.875 | 99.00th=[31851], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:33:11.875 | 99.99th=[36963] 00:33:11.875 bw ( KiB/s): min=20480, max=28672, per=25.01%, avg=24576.00, stdev=5792.62, samples=2 00:33:11.875 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:33:11.875 lat (msec) : 2=0.07%, 4=2.14%, 10=67.10%, 20=22.04%, 50=8.64% 00:33:11.875 cpu : usr=2.50%, sys=4.39%, ctx=683, majf=0, minf=1 00:33:11.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:11.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:11.876 issued rwts: total=6144,6146,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:11.876 job2: (groupid=0, jobs=1): err= 0: pid=4189683: Wed Nov 20 14:53:18 2024 00:33:11.876 
read: IOPS=6365, BW=24.9MiB/s (26.1MB/s)(25.0MiB/1005msec) 00:33:11.876 slat (nsec): min=948, max=8221.2k, avg=72489.45, stdev=459853.34 00:33:11.876 clat (usec): min=4657, max=23855, avg=9082.70, stdev=2391.60 00:33:11.876 lat (usec): min=5158, max=28504, avg=9155.18, stdev=2420.22 00:33:11.876 clat percentiles (usec): 00:33:11.876 | 1.00th=[ 5866], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7635], 00:33:11.876 | 30.00th=[ 8094], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8848], 00:33:11.876 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[11469], 95.00th=[14222], 00:33:11.876 | 99.00th=[20317], 99.50th=[20841], 99.90th=[21103], 99.95th=[21103], 00:33:11.876 | 99.99th=[23987] 00:33:11.876 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:33:11.876 slat (nsec): min=1593, max=12803k, avg=77626.82, stdev=472372.46 00:33:11.876 clat (usec): min=1185, max=38280, avg=10395.64, stdev=5240.98 00:33:11.876 lat (usec): min=1189, max=38285, avg=10473.26, stdev=5281.61 00:33:11.876 clat percentiles (usec): 00:33:11.876 | 1.00th=[ 5080], 5.00th=[ 6325], 10.00th=[ 7373], 20.00th=[ 7832], 00:33:11.876 | 30.00th=[ 8029], 40.00th=[ 8094], 50.00th=[ 8291], 60.00th=[ 8586], 00:33:11.876 | 70.00th=[ 9372], 80.00th=[12387], 90.00th=[17171], 95.00th=[21627], 00:33:11.876 | 99.00th=[31589], 99.50th=[34866], 99.90th=[36963], 99.95th=[38536], 00:33:11.876 | 99.99th=[38536] 00:33:11.876 bw ( KiB/s): min=25248, max=28000, per=27.09%, avg=26624.00, stdev=1945.96, samples=2 00:33:11.876 iops : min= 6312, max= 7000, avg=6656.00, stdev=486.49, samples=2 00:33:11.876 lat (msec) : 2=0.07%, 4=0.05%, 10=76.56%, 20=19.36%, 50=3.96% 00:33:11.876 cpu : usr=2.89%, sys=3.48%, ctx=817, majf=0, minf=1 00:33:11.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:33:11.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:11.876 issued rwts: 
total=6397,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:11.876 job3: (groupid=0, jobs=1): err= 0: pid=4189691: Wed Nov 20 14:53:18 2024 00:33:11.876 read: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:33:11.876 slat (nsec): min=943, max=12434k, avg=79282.72, stdev=648973.04 00:33:11.876 clat (usec): min=2065, max=26946, avg=10367.31, stdev=3829.08 00:33:11.876 lat (usec): min=2069, max=26972, avg=10446.59, stdev=3872.47 00:33:11.876 clat percentiles (usec): 00:33:11.876 | 1.00th=[ 5473], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 7373], 00:33:11.876 | 30.00th=[ 7701], 40.00th=[ 8225], 50.00th=[ 9110], 60.00th=[ 9896], 00:33:11.876 | 70.00th=[11731], 80.00th=[13698], 90.00th=[15533], 95.00th=[18744], 00:33:11.876 | 99.00th=[21890], 99.50th=[22414], 99.90th=[25822], 99.95th=[25822], 00:33:11.876 | 99.99th=[26870] 00:33:11.876 write: IOPS=6247, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1005msec); 0 zone resets 00:33:11.876 slat (nsec): min=1556, max=12202k, avg=76024.37, stdev=625811.80 00:33:11.876 clat (usec): min=485, max=33779, avg=10180.10, stdev=4904.02 00:33:11.876 lat (usec): min=491, max=33802, avg=10256.12, stdev=4953.80 00:33:11.876 clat percentiles (usec): 00:33:11.876 | 1.00th=[ 2835], 5.00th=[ 4752], 10.00th=[ 5211], 20.00th=[ 6915], 00:33:11.876 | 30.00th=[ 7308], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 9765], 00:33:11.876 | 70.00th=[11600], 80.00th=[14222], 90.00th=[17695], 95.00th=[20055], 00:33:11.876 | 99.00th=[24249], 99.50th=[24511], 99.90th=[29754], 99.95th=[30540], 00:33:11.876 | 99.99th=[33817] 00:33:11.876 bw ( KiB/s): min=20536, max=28672, per=25.04%, avg=24604.00, stdev=5753.02, samples=2 00:33:11.876 iops : min= 5134, max= 7168, avg=6151.00, stdev=1438.26, samples=2 00:33:11.876 lat (usec) : 500=0.02%, 1000=0.07% 00:33:11.876 lat (msec) : 2=0.37%, 4=0.80%, 10=58.93%, 20=35.27%, 50=4.53% 00:33:11.876 cpu : usr=2.99%, sys=3.69%, ctx=364, majf=0, minf=1 00:33:11.876 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:11.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:11.876 issued rwts: total=6144,6279,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:11.876 00:33:11.876 Run status group 0 (all jobs): 00:33:11.876 READ: bw=94.2MiB/s (98.8MB/s), 21.7MiB/s-24.9MiB/s (22.7MB/s-26.1MB/s), io=94.8MiB (99.4MB), run=1003-1006msec 00:33:11.876 WRITE: bw=96.0MiB/s (101MB/s), 21.9MiB/s-25.9MiB/s (22.9MB/s-27.1MB/s), io=96.5MiB (101MB), run=1003-1006msec 00:33:11.876 00:33:11.876 Disk stats (read/write): 00:33:11.876 nvme0n1: ios=4272/4608, merge=0/0, ticks=45556/57687, in_queue=103243, util=85.57% 00:33:11.876 nvme0n2: ios=4634/4734, merge=0/0, ticks=29197/28297, in_queue=57494, util=97.04% 00:33:11.876 nvme0n3: ios=5618/5632, merge=0/0, ticks=27456/29135, in_queue=56591, util=92.81% 00:33:11.876 nvme0n4: ios=5269/5632, merge=0/0, ticks=43384/46013, in_queue=89397, util=95.72% 00:33:11.876 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:33:11.876 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4189839 00:33:11.876 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:33:11.876 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:33:11.876 [global] 00:33:11.876 thread=1 00:33:11.876 invalidate=1 00:33:11.876 rw=read 00:33:11.876 time_based=1 00:33:11.876 runtime=10 00:33:11.876 ioengine=libaio 00:33:11.876 direct=1 00:33:11.876 bs=4096 00:33:11.876 iodepth=1 00:33:11.876 norandommap=1 00:33:11.876 numjobs=1 00:33:11.876 00:33:11.876 [job0] 
00:33:11.876 filename=/dev/nvme0n1 00:33:11.876 [job1] 00:33:11.876 filename=/dev/nvme0n2 00:33:11.876 [job2] 00:33:11.876 filename=/dev/nvme0n3 00:33:11.876 [job3] 00:33:11.876 filename=/dev/nvme0n4 00:33:11.876 Could not set queue depth (nvme0n1) 00:33:11.876 Could not set queue depth (nvme0n2) 00:33:11.876 Could not set queue depth (nvme0n3) 00:33:11.876 Could not set queue depth (nvme0n4) 00:33:11.876 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:11.876 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:11.876 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:11.876 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:11.876 fio-3.35 00:33:11.876 Starting 4 threads 00:33:15.174 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:33:15.174 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:33:15.174 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=266240, buflen=4096 00:33:15.174 fio: pid=4190183, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:15.174 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11415552, buflen=4096 00:33:15.174 fio: pid=4190173, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:15.174 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:15.174 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:33:15.174 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11239424, buflen=4096 00:33:15.174 fio: pid=4190138, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:15.174 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:15.174 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:33:15.174 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=11460608, buflen=4096 00:33:15.174 fio: pid=4190151, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:33:15.174 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:15.174 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:33:15.433 00:33:15.433 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4190138: Wed Nov 20 14:53:22 2024 00:33:15.433 read: IOPS=927, BW=3709KiB/s (3798kB/s)(10.7MiB/2959msec) 00:33:15.433 slat (usec): min=2, max=18832, avg=41.22, stdev=607.89 00:33:15.433 clat (usec): min=460, max=1660, avg=1032.28, stdev=117.82 00:33:15.433 lat (usec): min=463, max=20014, avg=1073.51, stdev=624.68 00:33:15.433 clat percentiles (usec): 00:33:15.433 | 1.00th=[ 635], 5.00th=[ 791], 10.00th=[ 898], 20.00th=[ 971], 00:33:15.433 | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1045], 60.00th=[ 1074], 00:33:15.433 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:33:15.433 
| 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1352], 99.95th=[ 1385], 00:33:15.433 | 99.99th=[ 1663] 00:33:15.433 bw ( KiB/s): min= 3696, max= 4016, per=35.37%, avg=3798.40, stdev=125.65, samples=5 00:33:15.433 iops : min= 924, max= 1004, avg=949.60, stdev=31.41, samples=5 00:33:15.433 lat (usec) : 500=0.11%, 750=3.64%, 1000=25.94% 00:33:15.433 lat (msec) : 2=70.27% 00:33:15.433 cpu : usr=1.05%, sys=2.94%, ctx=2749, majf=0, minf=2 00:33:15.433 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.433 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.433 issued rwts: total=2745,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.433 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:15.433 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=4190151: Wed Nov 20 14:53:22 2024 00:33:15.433 read: IOPS=895, BW=3579KiB/s (3665kB/s)(10.9MiB/3127msec) 00:33:15.433 slat (usec): min=3, max=20919, avg=50.70, stdev=747.14 00:33:15.433 clat (usec): min=398, max=6420, avg=1062.72, stdev=209.71 00:33:15.433 lat (usec): min=411, max=22111, avg=1113.43, stdev=781.41 00:33:15.433 clat percentiles (usec): 00:33:15.433 | 1.00th=[ 619], 5.00th=[ 750], 10.00th=[ 840], 20.00th=[ 955], 00:33:15.433 | 30.00th=[ 1004], 40.00th=[ 1045], 50.00th=[ 1074], 60.00th=[ 1123], 00:33:15.433 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1237], 95.00th=[ 1270], 00:33:15.433 | 99.00th=[ 1369], 99.50th=[ 1401], 99.90th=[ 1582], 99.95th=[ 6194], 00:33:15.433 | 99.99th=[ 6390] 00:33:15.433 bw ( KiB/s): min= 3135, max= 3920, per=33.50%, avg=3597.17, stdev=261.14, samples=6 00:33:15.433 iops : min= 783, max= 980, avg=899.17, stdev=65.55, samples=6 00:33:15.433 lat (usec) : 500=0.14%, 750=5.04%, 1000=23.54% 00:33:15.433 lat (msec) : 2=71.17%, 10=0.07% 00:33:15.433 cpu : usr=0.29%, sys=1.92%, ctx=2805, majf=0, 
minf=1 00:33:15.433 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.433 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.433 issued rwts: total=2799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.433 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:15.433 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4190173: Wed Nov 20 14:53:22 2024 00:33:15.433 read: IOPS=997, BW=3990KiB/s (4086kB/s)(10.9MiB/2794msec) 00:33:15.433 slat (usec): min=2, max=22913, avg=30.03, stdev=488.67 00:33:15.433 clat (usec): min=451, max=1484, avg=968.00, stdev=79.05 00:33:15.433 lat (usec): min=463, max=23994, avg=998.03, stdev=497.61 00:33:15.433 clat percentiles (usec): 00:33:15.434 | 1.00th=[ 742], 5.00th=[ 824], 10.00th=[ 873], 20.00th=[ 914], 00:33:15.434 | 30.00th=[ 938], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 996], 00:33:15.434 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1090], 00:33:15.434 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1237], 99.95th=[ 1254], 00:33:15.434 | 99.99th=[ 1483] 00:33:15.434 bw ( KiB/s): min= 3992, max= 4080, per=37.68%, avg=4046.40, stdev=38.12, samples=5 00:33:15.434 iops : min= 998, max= 1020, avg=1011.60, stdev= 9.53, samples=5 00:33:15.434 lat (usec) : 500=0.04%, 750=1.33%, 1000=63.77% 00:33:15.434 lat (msec) : 2=34.83% 00:33:15.434 cpu : usr=1.32%, sys=2.86%, ctx=2790, majf=0, minf=2 00:33:15.434 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.434 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.434 issued rwts: total=2788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.434 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:15.434 job3: (groupid=0, 
jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4190183: Wed Nov 20 14:53:22 2024 00:33:15.434 read: IOPS=24, BW=97.6KiB/s (99.9kB/s)(260KiB/2664msec) 00:33:15.434 slat (nsec): min=13418, max=34601, avg=26268.98, stdev=2508.88 00:33:15.434 clat (usec): min=939, max=42170, avg=40935.13, stdev=5059.28 00:33:15.434 lat (usec): min=973, max=42183, avg=40961.40, stdev=5058.19 00:33:15.434 clat percentiles (usec): 00:33:15.434 | 1.00th=[ 938], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:15.434 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:33:15.434 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:15.434 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:15.434 | 99.99th=[42206] 00:33:15.434 bw ( KiB/s): min= 96, max= 104, per=0.90%, avg=97.60, stdev= 3.58, samples=5 00:33:15.434 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:33:15.434 lat (usec) : 1000=1.52% 00:33:15.434 lat (msec) : 50=96.97% 00:33:15.434 cpu : usr=0.00%, sys=0.15%, ctx=66, majf=0, minf=2 00:33:15.434 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.434 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.434 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.434 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:15.434 00:33:15.434 Run status group 0 (all jobs): 00:33:15.434 READ: bw=10.5MiB/s (11.0MB/s), 97.6KiB/s-3990KiB/s (99.9kB/s-4086kB/s), io=32.8MiB (34.4MB), run=2664-3127msec 00:33:15.434 00:33:15.434 Disk stats (read/write): 00:33:15.434 nvme0n1: ios=2661/0, merge=0/0, ticks=2502/0, in_queue=2502, util=93.86% 00:33:15.434 nvme0n2: ios=2798/0, merge=0/0, ticks=3785/0, in_queue=3785, util=96.56% 00:33:15.434 nvme0n3: ios=2613/0, merge=0/0, ticks=2410/0, in_queue=2410, 
util=96.03% 00:33:15.434 nvme0n4: ios=63/0, merge=0/0, ticks=2580/0, in_queue=2580, util=96.42% 00:33:15.434 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:15.434 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:33:15.693 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:15.693 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:33:15.693 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:15.693 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:33:15.953 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:15.953 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 4189839 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:33:16.213 14:53:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:16.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:33:16.213 nvmf hotplug test: fio failed as expected 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f 
./local-job1-1-verify.state 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:16.213 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:16.472 rmmod nvme_tcp 00:33:16.472 rmmod nvme_fabrics 00:33:16.472 rmmod nvme_keyring 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 4186621 ']' 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 4186621 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 4186621 ']' 00:33:16.473 14:53:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 4186621 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4186621 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4186621' 00:33:16.473 killing process with pid 4186621 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 4186621 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 4186621 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 
00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:16.473 14:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:19.012 00:33:19.012 real 0m24.673s 00:33:19.012 user 1m58.869s 00:33:19.012 sys 0m9.412s 00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:19.012 ************************************ 00:33:19.012 END TEST nvmf_fio_target 00:33:19.012 ************************************ 00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@10 -- # set +x
00:33:19.012 ************************************
00:33:19.012 START TEST nvmf_bdevio
00:33:19.012 ************************************
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:33:19.012 * Looking for test storage...
00:33:19.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:33:19.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:19.012 --rc genhtml_branch_coverage=1
00:33:19.012 --rc genhtml_function_coverage=1
00:33:19.012 --rc genhtml_legend=1
00:33:19.012 --rc geninfo_all_blocks=1
00:33:19.012 --rc geninfo_unexecuted_blocks=1
00:33:19.012
00:33:19.012 '
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:33:19.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:19.012 --rc genhtml_branch_coverage=1
00:33:19.012 --rc genhtml_function_coverage=1
00:33:19.012 --rc genhtml_legend=1
00:33:19.012 --rc geninfo_all_blocks=1
00:33:19.012 --rc geninfo_unexecuted_blocks=1
00:33:19.012
00:33:19.012 '
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:33:19.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:19.012 --rc genhtml_branch_coverage=1
00:33:19.012 --rc genhtml_function_coverage=1
00:33:19.012 --rc genhtml_legend=1
00:33:19.012 --rc geninfo_all_blocks=1
00:33:19.012 --rc geninfo_unexecuted_blocks=1
00:33:19.012
00:33:19.012 '
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:33:19.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:19.012 --rc genhtml_branch_coverage=1
00:33:19.012 --rc genhtml_function_coverage=1
00:33:19.012 --rc genhtml_legend=1
00:33:19.012 --rc geninfo_all_blocks=1
00:33:19.012 --rc geninfo_unexecuted_blocks=1
00:33:19.012
00:33:19.012 '
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:19.012 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable
00:33:19.013 14:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=()
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=()
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=()
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=()
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=()
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=()
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=()
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:33:24.292 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:33:24.293 Found 0000:31:00.0 (0x8086 - 0x159b)
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:33:24.293 Found 0000:31:00.1 (0x8086 - 0x159b)
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:33:24.293 Found net devices under 0000:31:00.0: cvl_0_0
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:33:24.293 Found net devices under 0000:31:00.1: cvl_0_1
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:33:24.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:24.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms
00:33:24.293
00:33:24.293 --- 10.0.0.2 ping statistics ---
00:33:24.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:24.293 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:24.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:24.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms
00:33:24.293
00:33:24.293 --- 10.0.0.1 ping statistics ---
00:33:24.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:24.293 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:24.293 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:24.293 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:33:24.293 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:24.293 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:24.293 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:33:24.293 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2040
00:33:24.293 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2040
00:33:24.293 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2040 ']'
00:33:24.293 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:24.293 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:24.293 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:24.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:24.293 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:24.293 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:33:24.293 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78
00:33:24.293 [2024-11-20 14:53:31.045251] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:24.293 [2024-11-20 14:53:31.046222] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization...
00:33:24.294 [2024-11-20 14:53:31.046261] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:24.294 [2024-11-20 14:53:31.118146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:33:24.294 [2024-11-20 14:53:31.146828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:24.294 [2024-11-20 14:53:31.146858] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:24.294 [2024-11-20 14:53:31.146863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:24.294 [2024-11-20 14:53:31.146868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:24.294 [2024-11-20 14:53:31.146872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:24.294 [2024-11-20 14:53:31.148089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:33:24.294 [2024-11-20 14:53:31.148242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:33:24.294 [2024-11-20 14:53:31.148406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:33:24.294 [2024-11-20 14:53:31.148496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:33:24.294 [2024-11-20 14:53:31.198429] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:24.294 [2024-11-20 14:53:31.199351] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:33:24.294 [2024-11-20 14:53:31.199433] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:33:24.294 [2024-11-20 14:53:31.199619] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:33:24.294 [2024-11-20 14:53:31.199843] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:33:24.862 [2024-11-20 14:53:31.849223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:33:24.862 Malloc0
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:33:24.862 [2024-11-20 14:53:31.912997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=()
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:24.862 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:24.862 {
00:33:24.863 "params": {
00:33:24.863 "name": "Nvme$subsystem",
00:33:24.863 "trtype": "$TEST_TRANSPORT",
00:33:24.863 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:24.863 "adrfam": "ipv4",
00:33:24.863 "trsvcid": "$NVMF_PORT",
00:33:24.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:24.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:24.863 "hdgst": ${hdgst:-false},
00:33:24.863 "ddgst": ${ddgst:-false}
00:33:24.863 },
00:33:24.863 "method": "bdev_nvme_attach_controller"
00:33:24.863 }
00:33:24.863 EOF
00:33:24.863 )")
00:33:24.863 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat
00:33:25.123 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq .
00:33:25.123 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:33:25.123 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:25.123 "params": { 00:33:25.123 "name": "Nvme1", 00:33:25.123 "trtype": "tcp", 00:33:25.123 "traddr": "10.0.0.2", 00:33:25.123 "adrfam": "ipv4", 00:33:25.123 "trsvcid": "4420", 00:33:25.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:25.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:25.123 "hdgst": false, 00:33:25.123 "ddgst": false 00:33:25.123 }, 00:33:25.123 "method": "bdev_nvme_attach_controller" 00:33:25.123 }' 00:33:25.123 [2024-11-20 14:53:31.950087] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:33:25.123 [2024-11-20 14:53:31.950141] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2391 ] 00:33:25.123 [2024-11-20 14:53:32.015898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:25.123 [2024-11-20 14:53:32.048460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.123 [2024-11-20 14:53:32.048613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.123 [2024-11-20 14:53:32.048614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:25.382 I/O targets: 00:33:25.382 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:33:25.382 00:33:25.382 00:33:25.382 CUnit - A unit testing framework for C - Version 2.1-3 00:33:25.382 http://cunit.sourceforge.net/ 00:33:25.382 00:33:25.382 00:33:25.382 Suite: bdevio tests on: Nvme1n1 00:33:25.382 Test: blockdev write read block ...passed 00:33:25.382 Test: blockdev write zeroes read block ...passed 00:33:25.382 Test: blockdev write zeroes read no split ...passed 00:33:25.382 Test: blockdev 
write zeroes read split ...passed 00:33:25.382 Test: blockdev write zeroes read split partial ...passed 00:33:25.382 Test: blockdev reset ...[2024-11-20 14:53:32.333372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:25.382 [2024-11-20 14:53:32.333426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b314b0 (9): Bad file descriptor 00:33:25.382 [2024-11-20 14:53:32.426314] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:33:25.382 passed 00:33:25.382 Test: blockdev write read 8 blocks ...passed 00:33:25.641 Test: blockdev write read size > 128k ...passed 00:33:25.641 Test: blockdev write read invalid size ...passed 00:33:25.641 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:25.641 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:25.641 Test: blockdev write read max offset ...passed 00:33:25.641 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:25.641 Test: blockdev writev readv 8 blocks ...passed 00:33:25.641 Test: blockdev writev readv 30 x 1block ...passed 00:33:25.641 Test: blockdev writev readv block ...passed 00:33:25.641 Test: blockdev writev readv size > 128k ...passed 00:33:25.641 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:25.641 Test: blockdev comparev and writev ...[2024-11-20 14:53:32.685670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:25.641 [2024-11-20 14:53:32.685694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.641 [2024-11-20 14:53:32.685705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:25.641 
[2024-11-20 14:53:32.685711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:25.641 [2024-11-20 14:53:32.686097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:25.641 [2024-11-20 14:53:32.686106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:25.641 [2024-11-20 14:53:32.686115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:25.641 [2024-11-20 14:53:32.686121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:25.641 [2024-11-20 14:53:32.686487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:25.641 [2024-11-20 14:53:32.686495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:25.641 [2024-11-20 14:53:32.686505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:25.641 [2024-11-20 14:53:32.686510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:25.641 [2024-11-20 14:53:32.686880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:25.641 [2024-11-20 14:53:32.686888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:25.641 [2024-11-20 14:53:32.686898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:25.641 [2024-11-20 14:53:32.686904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:25.901 passed 00:33:25.901 Test: blockdev nvme passthru rw ...passed 00:33:25.901 Test: blockdev nvme passthru vendor specific ...[2024-11-20 14:53:32.769716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:25.901 [2024-11-20 14:53:32.769727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:25.901 [2024-11-20 14:53:32.769929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:25.901 [2024-11-20 14:53:32.769937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:25.901 [2024-11-20 14:53:32.770188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:25.901 [2024-11-20 14:53:32.770196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:25.901 [2024-11-20 14:53:32.770418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:25.901 [2024-11-20 14:53:32.770426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:25.901 passed 00:33:25.901 Test: blockdev nvme admin passthru ...passed 00:33:25.901 Test: blockdev copy ...passed 00:33:25.901 00:33:25.901 Run Summary: Type Total Ran Passed Failed Inactive 00:33:25.901 suites 1 1 n/a 0 0 00:33:25.901 tests 23 23 23 0 0 00:33:25.901 asserts 152 152 152 0 n/a 00:33:25.901 00:33:25.901 Elapsed time = 1.290 
seconds 00:33:25.901 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:25.901 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.901 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:25.901 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.901 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:33:25.901 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:33:25.901 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:25.901 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:33:25.901 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:25.901 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:33:25.901 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:25.901 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:25.901 rmmod nvme_tcp 00:33:25.901 rmmod nvme_fabrics 00:33:26.160 rmmod nvme_keyring 00:33:26.160 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:26.160 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:33:26.160 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:33:26.160 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2040 ']' 00:33:26.160 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2040 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2040 ']' 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2040 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2040 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2040' 00:33:26.160 killing process with pid 2040 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2040 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2040 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 
00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:26.160 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.697 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:28.697 00:33:28.697 real 0m9.634s 00:33:28.697 user 0m7.998s 00:33:28.697 sys 0m4.641s 00:33:28.697 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:28.697 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:28.697 ************************************ 00:33:28.697 END TEST nvmf_bdevio 00:33:28.697 ************************************ 00:33:28.697 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:28.697 00:33:28.697 real 4m23.780s 00:33:28.697 user 9m31.823s 00:33:28.697 sys 1m37.801s 00:33:28.697 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:33:28.697 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:28.697 ************************************ 00:33:28.697 END TEST nvmf_target_core_interrupt_mode 00:33:28.697 ************************************ 00:33:28.697 14:53:35 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:28.697 14:53:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:28.697 14:53:35 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:28.697 14:53:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:28.697 ************************************ 00:33:28.697 START TEST nvmf_interrupt 00:33:28.697 ************************************ 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:28.697 * Looking for test storage... 
00:33:28.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:28.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.697 --rc genhtml_branch_coverage=1 00:33:28.697 --rc genhtml_function_coverage=1 00:33:28.697 --rc genhtml_legend=1 00:33:28.697 --rc geninfo_all_blocks=1 00:33:28.697 --rc geninfo_unexecuted_blocks=1 00:33:28.697 00:33:28.697 ' 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:28.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.697 --rc genhtml_branch_coverage=1 00:33:28.697 --rc 
genhtml_function_coverage=1 00:33:28.697 --rc genhtml_legend=1 00:33:28.697 --rc geninfo_all_blocks=1 00:33:28.697 --rc geninfo_unexecuted_blocks=1 00:33:28.697 00:33:28.697 ' 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:28.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.697 --rc genhtml_branch_coverage=1 00:33:28.697 --rc genhtml_function_coverage=1 00:33:28.697 --rc genhtml_legend=1 00:33:28.697 --rc geninfo_all_blocks=1 00:33:28.697 --rc geninfo_unexecuted_blocks=1 00:33:28.697 00:33:28.697 ' 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:28.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.697 --rc genhtml_branch_coverage=1 00:33:28.697 --rc genhtml_function_coverage=1 00:33:28.697 --rc genhtml_legend=1 00:33:28.697 --rc geninfo_all_blocks=1 00:33:28.697 --rc geninfo_unexecuted_blocks=1 00:33:28.697 00:33:28.697 ' 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:28.697 
14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.697 14:53:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.698 
14:53:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:28.698 14:53:35 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:28.698 
14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:28.698 14:53:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:33.972 14:53:40 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:33.972 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:33.972 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:33.972 14:53:40 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:33.972 Found net devices under 0000:31:00.0: cvl_0_0 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:33.972 Found net devices under 0000:31:00.1: cvl_0_1 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:33.972 14:53:40 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:33.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:33.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:33:33.972 00:33:33.972 --- 10.0.0.2 ping statistics --- 00:33:33.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.972 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:33:33.972 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:33.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:33.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:33:33.972 00:33:33.972 --- 10.0.0.1 ping statistics --- 00:33:33.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.972 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:33.973 14:53:40 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=6971 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 6971 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 6971 ']' 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:33.973 14:53:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:33.973 [2024-11-20 14:53:40.804226] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:33.973 [2024-11-20 14:53:40.805205] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:33:33.973 [2024-11-20 14:53:40.805249] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:33.973 [2024-11-20 14:53:40.888848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:33.973 [2024-11-20 14:53:40.924829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:33.973 [2024-11-20 14:53:40.924860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:33.973 [2024-11-20 14:53:40.924868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.973 [2024-11-20 14:53:40.924875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.973 [2024-11-20 14:53:40.924881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:33.973 [2024-11-20 14:53:40.926023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.973 [2024-11-20 14:53:40.926028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.973 [2024-11-20 14:53:40.982156] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:33.973 [2024-11-20 14:53:40.982808] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:33.973 [2024-11-20 14:53:40.982816] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
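The reactor checks that follow classify each SPDK reactor thread as busy or idle by parsing the %CPU column of `top -bHn 1` output, exactly as `interrupt/common.sh` does with its `sed`/`awk` pipeline. A minimal standalone sketch of that parsing step (the `classify_reactor` function name, the sample `top` lines, and the default threshold are illustrative; a real check would pipe `top -bHn 1 -p "$pid" -w 256 | grep reactor_N` into it):

```shell
# Extract the %CPU field from a single `top -bHn 1` thread line and
# classify the reactor against an idle threshold, mirroring the
# cpu_rate handling seen in interrupt/common.sh above.
classify_reactor() {
  local top_line=$1 idle_threshold=${2:-30}
  local cpu_rate
  # Field 9 of top's batch thread view is %CPU; strip leading spaces first.
  cpu_rate=$(echo "$top_line" | sed -e 's/^\s*//g' | awk '{print $9}')
  cpu_rate=${cpu_rate%.*}   # drop the fractional part for integer comparison
  if (( cpu_rate > idle_threshold )); then
    echo busy
  else
    echo idle
  fi
}

classify_reactor ' 6971 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.23 reactor_0'
```

With the log's 6.7 %CPU sample this prints `idle`; the test script retries up to 10 times (the `(( j = 10 ))` loop below) before concluding a reactor is stuck in the wrong state.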
00:33:34.541 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:34.541 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:33:34.541 14:53:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:34.541 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:34.541 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:34.801 14:53:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:34.801 14:53:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:33:34.801 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:33:34.801 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:34.801 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:33:34.801 5000+0 records in 00:33:34.801 5000+0 records out 00:33:34.801 10240000 bytes (10 MB, 9.8 MiB) copied, 0.00879731 s, 1.2 GB/s 00:33:34.801 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:33:34.801 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.801 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:34.801 AIO0 00:33:34.801 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.801 14:53:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:33:34.801 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.801 14:53:41 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:34.801 [2024-11-20 14:53:41.666543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:34.801 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.801 14:53:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:34.801 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.801 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:34.802 [2024-11-20 14:53:41.694863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 6971 0 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 6971 0 idle 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=6971 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 6971 -w 256 00:33:34.802 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 6971 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.23 reactor_0' 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 6971 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.23 reactor_0 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 6971 1 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 6971 1 idle 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=6971 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:35.061 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:35.062 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 6971 -w 256 00:33:35.062 14:53:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 6978 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 6978 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:33:35.062 
14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=7339 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 6971 0 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 6971 0 busy 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=6971 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:35.062 14:53:42 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 6971 -w 256 00:33:35.062 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:35.322 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 6971 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.24 reactor_0' 00:33:35.322 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:35.322 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 6971 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.24 reactor_0 00:33:35.322 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:35.322 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:33:35.322 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:33:35.322 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:35.322 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:35.322 14:53:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:33:36.260 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:33:36.260 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:36.260 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 6971 -w 256 00:33:36.260 14:53:43 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 6971 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.53 reactor_0' 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 6971 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.53 reactor_0 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 6971 1 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 6971 1 busy 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=6971 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local 
idle_threshold=30 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:36.519 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 6971 -w 256 00:33:36.520 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:36.520 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 6978 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.32 reactor_1' 00:33:36.520 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 6978 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.32 reactor_1 00:33:36.520 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:36.520 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:36.520 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:36.520 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:36.520 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:36.520 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:36.520 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:36.520 14:53:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:36.520 14:53:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 7339 00:33:46.504 Initializing NVMe Controllers 00:33:46.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:46.504 Controller IO queue size 256, less than required. 
00:33:46.504 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:46.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:46.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:46.504 Initialization complete. Launching workers. 00:33:46.504 ======================================================== 00:33:46.504 Latency(us) 00:33:46.504 Device Information : IOPS MiB/s Average min max 00:33:46.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 20772.40 81.14 12328.19 3424.59 20790.07 00:33:46.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 22304.30 87.13 11481.73 3327.69 19985.22 00:33:46.504 ======================================================== 00:33:46.504 Total : 43076.69 168.27 11889.91 3327.69 20790.07 00:33:46.504 00:33:46.504 [2024-11-20 14:53:52.261120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc69330 is same with the state(6) to be set 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 6971 0 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 6971 0 idle 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=6971 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:46.504 
14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 6971 -w 256 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 6971 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.23 reactor_0' 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 6971 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.23 reactor_0 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 6971 1 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 6971 1 idle 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local 
pid=6971 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 6971 -w 256 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 6978 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 6978 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( 
cpu_rate > idle_threshold )) 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:46.504 14:53:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 6971 0 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 6971 0 idle 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=6971 00:33:48.407 14:53:54 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:48.407 14:53:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 6971 -w 256 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 6971 root 20 0 128.2g 79488 32256 S 6.7 0.1 0:20.37 reactor_0' 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 6971 root 20 0 128.2g 79488 32256 S 6.7 0.1 0:20.37 reactor_0 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 
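The reactor idle checks traced above sample one thread of the target with `top -bHn 1 -p PID`, grep the `reactor_N` row, take the ninth column (%CPU), truncate the fraction, and compare against the idle threshold. The column extraction can be sketched as a standalone helper (a minimal sketch mirroring the sed/awk/truncation steps in the trace; the sample rows below are copied from the log, the function name is illustrative):

```shell
parse_cpu_rate() {
    # Strip leading whitespace (the sed step in the trace), take the %CPU
    # column from top's per-thread row with awk, then drop the fractional
    # part the way the trace truncates cpu_rate before the comparison.
    local row=$1 cpu
    cpu=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')
    echo "${cpu%.*}"
}

# e.g. parse_cpu_rate ' 6971 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.23 reactor_0'
# yields 6, which is then tested against idle_threshold=30 / busy_threshold=65.
```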
00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 6971 1 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 6971 1 idle 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=6971 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 6971 -w 256 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 6978 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.05 reactor_1' 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 6978 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.05 reactor_1 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- 
# awk '{print $9}' 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:48.407 14:53:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:48.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:48.667 14:53:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:48.667 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:33:48.667 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:48.667 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:48.667 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:48.667 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:48.667 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
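The `waitforserial` / `waitforserial_disconnect` steps traced above poll `lsblk -l -o NAME,SERIAL` and count rows matching the target serial (`SPDKISFASTANDAWESOME`) until the expected number of namespaces appear or disappear. A hedged sketch of that polling pattern (function name, retry count, and sleep interval are illustrative, not SPDK's exact values):

```shell
waitforserial_sketch() {
    # Poll lsblk until `expected` block devices carry the given serial.
    local serial=$1 expected=${2:-1} retries=${3:-15} i found
    for (( i = 0; i <= retries; i++ )); do
        # grep -c prints 0 when nothing matches, so found is always numeric
        found=$(lsblk -l -o NAME,SERIAL 2>/dev/null | grep -c "$serial")
        (( found == expected )) && return 0
        sleep 1
    done
    return 1
}
```

Waiting for disconnect is the same loop with `expected=0`, which is how the trace reuses the pattern after `nvme disconnect`.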
00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:48.668 rmmod nvme_tcp 00:33:48.668 rmmod nvme_fabrics 00:33:48.668 rmmod nvme_keyring 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 6971 ']' 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 6971 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 6971 ']' 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 6971 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 6971 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 6971' 00:33:48.668 killing process with pid 6971 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 6971 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 6971 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:48.668 14:53:55 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:48.668 14:53:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.207 14:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:51.207 00:33:51.207 real 0m22.467s 00:33:51.207 user 0m39.623s 00:33:51.207 sys 0m7.330s 00:33:51.207 14:53:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:51.207 14:53:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:51.207 ************************************ 00:33:51.207 END TEST nvmf_interrupt 00:33:51.207 ************************************ 00:33:51.207 00:33:51.207 real 26m9.323s 00:33:51.207 user 56m28.163s 00:33:51.207 sys 7m59.054s 00:33:51.207 14:53:57 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:51.207 14:53:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:51.207 ************************************ 00:33:51.207 END TEST nvmf_tcp 00:33:51.207 ************************************ 00:33:51.207 14:53:57 -- 
spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:33:51.207 14:53:57 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:51.207 14:53:57 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:51.207 14:53:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:51.207 14:53:57 -- common/autotest_common.sh@10 -- # set +x 00:33:51.207 ************************************ 00:33:51.207 START TEST spdkcli_nvmf_tcp 00:33:51.207 ************************************ 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:51.207 * Looking for test storage... 00:33:51.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:51.207 
14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:51.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.207 --rc genhtml_branch_coverage=1 00:33:51.207 --rc genhtml_function_coverage=1 00:33:51.207 
--rc genhtml_legend=1 00:33:51.207 --rc geninfo_all_blocks=1 00:33:51.207 --rc geninfo_unexecuted_blocks=1 00:33:51.207 00:33:51.207 ' 00:33:51.207 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:51.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.207 --rc genhtml_branch_coverage=1 00:33:51.207 --rc genhtml_function_coverage=1 00:33:51.207 --rc genhtml_legend=1 00:33:51.207 --rc geninfo_all_blocks=1 00:33:51.208 --rc geninfo_unexecuted_blocks=1 00:33:51.208 00:33:51.208 ' 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:51.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.208 --rc genhtml_branch_coverage=1 00:33:51.208 --rc genhtml_function_coverage=1 00:33:51.208 --rc genhtml_legend=1 00:33:51.208 --rc geninfo_all_blocks=1 00:33:51.208 --rc geninfo_unexecuted_blocks=1 00:33:51.208 00:33:51.208 ' 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:51.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.208 --rc genhtml_branch_coverage=1 00:33:51.208 --rc genhtml_function_coverage=1 00:33:51.208 --rc genhtml_legend=1 00:33:51.208 --rc geninfo_all_blocks=1 00:33:51.208 --rc geninfo_unexecuted_blocks=1 00:33:51.208 00:33:51.208 ' 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # uname -s 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:51.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=10845 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 10845 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 10845 ']' 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:51.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:51.208 14:53:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:51.208 [2024-11-20 14:53:58.005464] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
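`waitforlisten 10845` above blocks until the freshly launched `nvmf_tgt` is up and listening on the UNIX domain socket `/var/tmp/spdk.sock`. The wait-until-present pattern can be sketched generically (a minimal sketch: SPDK's real helper checks the RPC socket specifically, while this version tests plain path existence with `-e` so it stays self-contained; retry count and interval are illustrative):

```shell
wait_for_path() {
    # Poll until the given path exists, with a bounded number of retries.
    local path=$1 retries=${2:-100} i
    for (( i = 0; i < retries; i++ )); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}
```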
00:33:51.208 [2024-11-20 14:53:58.005516] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid10845 ] 00:33:51.208 [2024-11-20 14:53:58.070746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:51.208 [2024-11-20 14:53:58.102271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:51.208 [2024-11-20 14:53:58.102274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.208 14:53:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:51.208 14:53:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:33:51.208 14:53:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:51.208 14:53:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:51.208 14:53:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:51.208 14:53:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:51.208 14:53:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:51.208 14:53:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:51.208 14:53:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:51.208 14:53:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:51.208 14:53:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:51.208 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:51.208 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:51.208 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:51.208 '\''/bdevs/malloc create 32 512 
Malloc5'\'' '\''Malloc5'\'' True 00:33:51.208 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:51.208 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:51.208 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:51.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:51.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:51.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:51.208 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:51.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:51.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:51.208 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:51.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:51.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:51.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:51.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:51.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:51.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:51.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:51.208 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:51.209 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:51.209 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:51.209 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:51.209 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:51.209 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:51.209 ' 00:33:53.881 [2024-11-20 14:54:00.615133] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.818 [2024-11-20 14:54:01.842940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:57.350 [2024-11-20 14:54:04.133289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:59.255 [2024-11-20 14:54:06.094834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:00.634 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:00.634 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:00.634 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:00.634 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:00.634 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:00.634 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:00.634 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:00.634 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:00.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:00.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:00.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:00.634 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:00.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:00.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:00.634 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:00.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:00.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:00.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:00.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:00.634 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:00.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:00.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:00.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:00.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:00.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:00.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:00.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:00.634 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:00.893 14:54:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:00.894 14:54:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:00.894 14:54:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:00.894 14:54:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:00.894 14:54:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:00.894 14:54:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:00.894 14:54:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:00.894 14:54:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:01.153 14:54:08 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:01.154 14:54:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:01.154 14:54:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:01.154 14:54:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:01.154 14:54:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:01.154 14:54:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:01.154 14:54:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:01.154 14:54:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:01.154 14:54:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:01.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:01.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:01.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:01.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:01.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:01.154 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:01.154 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:01.154 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 00:34:01.154 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:01.154 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:01.154 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:01.154 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:01.154 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:01.154 ' 00:34:06.430 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:06.430 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:06.430 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:06.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:06.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:06.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:06.431 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:06.431 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:06.431 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:06.431 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:06.431 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:06.431 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:06.431 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:06.431 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 10845 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 10845 ']' 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 10845 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 10845 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 10845' 00:34:06.431 killing process with pid 10845 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 10845 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 10845 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 10845 ']' 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 10845 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 10845 ']' 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 10845 00:34:06.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (10845) - No such process 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@981 -- # echo 'Process with pid 10845 is not found' 00:34:06.431 Process with pid 10845 is not found 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:06.431 00:34:06.431 real 0m15.647s 00:34:06.431 user 0m33.338s 00:34:06.431 sys 0m0.549s 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:06.431 14:54:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:06.431 ************************************ 00:34:06.431 END TEST spdkcli_nvmf_tcp 00:34:06.431 ************************************ 00:34:06.691 14:54:13 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:06.691 14:54:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:06.691 14:54:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:06.691 14:54:13 -- common/autotest_common.sh@10 -- # set +x 00:34:06.691 ************************************ 00:34:06.691 START TEST nvmf_identify_passthru 00:34:06.691 ************************************ 00:34:06.691 14:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:06.691 * Looking for test storage... 
00:34:06.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:06.691 14:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:06.691 14:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:34:06.691 14:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:06.691 14:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:06.691 14:54:13 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:34:06.691 14:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:06.691 14:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:06.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.691 --rc genhtml_branch_coverage=1 00:34:06.691 --rc genhtml_function_coverage=1 00:34:06.691 --rc genhtml_legend=1 00:34:06.691 --rc geninfo_all_blocks=1 00:34:06.691 --rc geninfo_unexecuted_blocks=1 00:34:06.691 00:34:06.691 ' 00:34:06.691 14:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:06.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.691 --rc genhtml_branch_coverage=1 00:34:06.691 --rc genhtml_function_coverage=1 
00:34:06.691 --rc genhtml_legend=1 00:34:06.691 --rc geninfo_all_blocks=1 00:34:06.691 --rc geninfo_unexecuted_blocks=1 00:34:06.691 00:34:06.691 ' 00:34:06.691 14:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:06.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.691 --rc genhtml_branch_coverage=1 00:34:06.691 --rc genhtml_function_coverage=1 00:34:06.691 --rc genhtml_legend=1 00:34:06.691 --rc geninfo_all_blocks=1 00:34:06.691 --rc geninfo_unexecuted_blocks=1 00:34:06.691 00:34:06.691 ' 00:34:06.691 14:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:06.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.691 --rc genhtml_branch_coverage=1 00:34:06.691 --rc genhtml_function_coverage=1 00:34:06.691 --rc genhtml_legend=1 00:34:06.691 --rc geninfo_all_blocks=1 00:34:06.691 --rc geninfo_unexecuted_blocks=1 00:34:06.691 00:34:06.691 ' 00:34:06.691 14:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:06.691 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:06.691 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:06.691 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:06.691 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:06.691 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:06.691 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:06.691 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:06.691 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:06.692 14:54:13 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:06.692 14:54:13 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:06.692 14:54:13 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.692 14:54:13 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.692 14:54:13 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.692 14:54:13 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.692 14:54:13 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.692 14:54:13 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.692 14:54:13 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:06.692 14:54:13 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:06.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:06.692 14:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:06.692 14:54:13 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:06.692 14:54:13 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.692 14:54:13 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.692 14:54:13 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.692 14:54:13 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.692 14:54:13 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.692 14:54:13 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.692 14:54:13 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:06.692 14:54:13 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.692 14:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@476 -- 
# prepare_net_devs 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.692 14:54:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:06.692 14:54:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:06.692 14:54:13 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:34:06.692 14:54:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:11.970 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:11.970 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:34:11.970 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:11.970 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:11.970 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:11.970 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:34:11.971 14:54:18 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:11.971 
14:54:18 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:11.971 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:11.971 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:11.971 Found net devices under 0000:31:00.0: cvl_0_0 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:11.971 Found net devices under 0000:31:00.1: cvl_0_1 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:11.971 14:54:18 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:11.971 14:54:19 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:11.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:11.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:34:11.971 00:34:11.971 --- 10.0.0.2 ping statistics --- 00:34:11.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.971 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:34:11.971 14:54:19 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:11.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:11.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:34:11.971 00:34:11.971 --- 10.0.0.1 ping statistics --- 00:34:11.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.971 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:34:11.971 14:54:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:11.971 14:54:19 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:34:11.971 14:54:19 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:11.971 14:54:19 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:11.971 14:54:19 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:11.971 14:54:19 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:11.971 14:54:19 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:11.971 14:54:19 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:11.971 14:54:19 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:12.231 14:54:19 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:12.231 14:54:19 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:12.231 14:54:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:12.231 14:54:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:12.231 14:54:19 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:12.231 14:54:19 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:12.231 14:54:19 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:34:12.231 14:54:19 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:34:12.231 14:54:19 nvmf_identify_passthru -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:34:12.231 14:54:19 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:34:12.231 14:54:19 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:12.231 14:54:19 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:12.231 14:54:19 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:12.231 14:54:19 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:34:12.231 14:54:19 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:34:12.231 14:54:19 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:34:12.231 14:54:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:34:12.231 14:54:19 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:34:12.231 14:54:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:34:12.231 14:54:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:12.231 14:54:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:12.799 14:54:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605499 00:34:12.799 14:54:19 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:34:12.799 14:54:19 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:12.799 14:54:19 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:13.060 14:54:19 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:34:13.060 14:54:19 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:13.060 14:54:19 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:13.060 14:54:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:13.060 14:54:20 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:13.060 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:13.060 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:13.060 14:54:20 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=18819 00:34:13.060 14:54:20 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:13.060 14:54:20 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 18819 00:34:13.060 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 18819 ']' 00:34:13.060 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:13.060 14:54:20 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:13.060 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:13.060 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:13.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
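A few lines up, the serial- and model-number probes pipe `spdk_nvme_identify` output through `grep | awk '{print $3}'`. A reproduction against a hypothetical two-line identify excerpt (the serial matches the log; the full model string is invented for illustration) shows why the log records the model as just `SAMSUNG`: `$3` keeps only the first whitespace-separated word after "Model Number:".

```shell
# Hypothetical spdk_nvme_identify excerpt; real output has many more fields.
identify_output='Serial Number:       S64GNE0R605499
Model Number:        SAMSUNG MZQL21T9HCJR-00A07'

serial=$(printf '%s\n' "$identify_output" | grep 'Serial Number:' | awk '{print $3}')
model=$(printf '%s\n' "$identify_output" | grep 'Model Number:' | awk '{print $3}')
echo "serial=$serial model=$model"   # model truncated to its first word
```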
00:34:13.060 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:13.060 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:13.060 [2024-11-20 14:54:20.066823] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:34:13.060 [2024-11-20 14:54:20.066874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:13.320 [2024-11-20 14:54:20.139642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:13.320 [2024-11-20 14:54:20.170397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:13.320 [2024-11-20 14:54:20.170427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:13.320 [2024-11-20 14:54:20.170433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:13.320 [2024-11-20 14:54:20.170439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:13.320 [2024-11-20 14:54:20.170445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
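The TCP topology built earlier by `nvmf_tcp_init` can be condensed into the following root-only sketch. Every command is taken from the xtrace above; it is a configuration fragment that assumes a host with the two `cvl_0_*` interfaces, so it is not independently runnable. Moving the target interface into its own namespace forces initiator→target traffic over the physical NIC pair instead of the loopback path.

```shell
# Mirror of nvmf_tcp_init (nvmf/common.sh@250-291): target side lives in a
# network namespace, initiator side stays in the root namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```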
00:34:13.320 [2024-11-20 14:54:20.171760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:13.320 [2024-11-20 14:54:20.171871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:13.320 [2024-11-20 14:54:20.172031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:13.320 [2024-11-20 14:54:20.172033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:13.889 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:13.889 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:34:13.889 14:54:20 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:13.889 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.889 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:13.889 INFO: Log level set to 20 00:34:13.889 INFO: Requests: 00:34:13.889 { 00:34:13.889 "jsonrpc": "2.0", 00:34:13.889 "method": "nvmf_set_config", 00:34:13.889 "id": 1, 00:34:13.889 "params": { 00:34:13.889 "admin_cmd_passthru": { 00:34:13.889 "identify_ctrlr": true 00:34:13.889 } 00:34:13.889 } 00:34:13.889 } 00:34:13.889 00:34:13.889 INFO: response: 00:34:13.889 { 00:34:13.889 "jsonrpc": "2.0", 00:34:13.889 "id": 1, 00:34:13.889 "result": true 00:34:13.889 } 00:34:13.889 00:34:13.889 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.889 14:54:20 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:13.889 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.889 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:13.889 INFO: Setting log level to 20 00:34:13.890 INFO: Setting log level to 20 00:34:13.890 INFO: Log level set to 20 00:34:13.890 INFO: Log level set to 20 00:34:13.890 
INFO: Requests: 00:34:13.890 { 00:34:13.890 "jsonrpc": "2.0", 00:34:13.890 "method": "framework_start_init", 00:34:13.890 "id": 1 00:34:13.890 } 00:34:13.890 00:34:13.890 INFO: Requests: 00:34:13.890 { 00:34:13.890 "jsonrpc": "2.0", 00:34:13.890 "method": "framework_start_init", 00:34:13.890 "id": 1 00:34:13.890 } 00:34:13.890 00:34:13.890 [2024-11-20 14:54:20.909001] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:13.890 INFO: response: 00:34:13.890 { 00:34:13.890 "jsonrpc": "2.0", 00:34:13.890 "id": 1, 00:34:13.890 "result": true 00:34:13.890 } 00:34:13.890 00:34:13.890 INFO: response: 00:34:13.890 { 00:34:13.890 "jsonrpc": "2.0", 00:34:13.890 "id": 1, 00:34:13.890 "result": true 00:34:13.890 } 00:34:13.890 00:34:13.890 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.890 14:54:20 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:13.890 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.890 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:13.890 INFO: Setting log level to 40 00:34:13.890 INFO: Setting log level to 40 00:34:13.890 INFO: Setting log level to 40 00:34:13.890 [2024-11-20 14:54:20.918038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:13.890 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.890 14:54:20 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:13.890 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:13.890 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:14.149 14:54:20 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:34:14.149 14:54:20 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.149 14:54:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:14.409 Nvme0n1 00:34:14.409 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.409 14:54:21 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:14.409 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.409 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:14.409 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.409 14:54:21 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:14.409 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.409 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:14.409 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.409 14:54:21 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:14.409 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.409 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:14.409 [2024-11-20 14:54:21.282538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:14.409 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.409 14:54:21 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:14.409 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.409 14:54:21 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:14.409 [ 00:34:14.409 { 00:34:14.409 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:14.409 "subtype": "Discovery", 00:34:14.409 "listen_addresses": [], 00:34:14.409 "allow_any_host": true, 00:34:14.409 "hosts": [] 00:34:14.409 }, 00:34:14.409 { 00:34:14.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:14.409 "subtype": "NVMe", 00:34:14.409 "listen_addresses": [ 00:34:14.409 { 00:34:14.409 "trtype": "TCP", 00:34:14.409 "adrfam": "IPv4", 00:34:14.409 "traddr": "10.0.0.2", 00:34:14.409 "trsvcid": "4420" 00:34:14.409 } 00:34:14.409 ], 00:34:14.409 "allow_any_host": true, 00:34:14.409 "hosts": [], 00:34:14.409 "serial_number": "SPDK00000000000001", 00:34:14.409 "model_number": "SPDK bdev Controller", 00:34:14.409 "max_namespaces": 1, 00:34:14.409 "min_cntlid": 1, 00:34:14.409 "max_cntlid": 65519, 00:34:14.409 "namespaces": [ 00:34:14.409 { 00:34:14.409 "nsid": 1, 00:34:14.409 "bdev_name": "Nvme0n1", 00:34:14.409 "name": "Nvme0n1", 00:34:14.409 "nguid": "363447305260549900253845000000A3", 00:34:14.409 "uuid": "36344730-5260-5499-0025-3845000000a3" 00:34:14.409 } 00:34:14.409 ] 00:34:14.409 } 00:34:14.409 ] 00:34:14.409 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.409 14:54:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:14.409 14:54:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:14.409 14:54:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:14.409 14:54:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605499 00:34:14.409 14:54:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:14.409 14:54:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:14.409 14:54:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:14.669 14:54:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:34:14.669 14:54:21 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605499 '!=' S64GNE0R605499 ']' 00:34:14.669 14:54:21 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:34:14.669 14:54:21 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:14.669 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.669 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:14.669 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.669 14:54:21 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:14.669 14:54:21 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:14.669 14:54:21 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:14.669 14:54:21 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:14.669 14:54:21 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:14.669 14:54:21 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:14.669 14:54:21 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:14.669 14:54:21 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:14.669 rmmod nvme_tcp 00:34:14.669 rmmod nvme_fabrics 00:34:14.669 rmmod nvme_keyring 00:34:14.669 14:54:21 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:14.669 14:54:21 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:14.669 14:54:21 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:14.669 14:54:21 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 18819 ']' 00:34:14.669 14:54:21 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 18819 00:34:14.669 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 18819 ']' 00:34:14.669 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 18819 00:34:14.669 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:34:14.669 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:14.929 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 18819 00:34:14.929 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:14.929 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:14.929 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 18819' 00:34:14.929 killing process with pid 18819 00:34:14.929 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 18819 00:34:14.929 14:54:21 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 18819 00:34:15.190 14:54:21 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:15.190 14:54:21 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:15.190 14:54:21 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:15.190 14:54:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:15.190 14:54:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:15.190 14:54:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- # 
iptables-save 00:34:15.190 14:54:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:15.190 14:54:22 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:15.190 14:54:22 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:15.190 14:54:22 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.190 14:54:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:15.190 14:54:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.098 14:54:24 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:17.098 00:34:17.098 real 0m10.529s 00:34:17.098 user 0m8.741s 00:34:17.098 sys 0m4.834s 00:34:17.098 14:54:24 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:17.098 14:54:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:17.098 ************************************ 00:34:17.098 END TEST nvmf_identify_passthru 00:34:17.098 ************************************ 00:34:17.098 14:54:24 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:17.098 14:54:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:17.098 14:54:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:17.098 14:54:24 -- common/autotest_common.sh@10 -- # set +x 00:34:17.098 ************************************ 00:34:17.098 START TEST nvmf_dif 00:34:17.098 ************************************ 00:34:17.098 14:54:24 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:17.098 * Looking for test storage... 
00:34:17.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:17.358 14:54:24 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:17.358 14:54:24 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:34:17.358 14:54:24 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:17.358 14:54:24 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:17.358 14:54:24 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:17.358 14:54:24 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:17.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.358 --rc genhtml_branch_coverage=1 00:34:17.358 --rc genhtml_function_coverage=1 00:34:17.358 --rc genhtml_legend=1 00:34:17.358 --rc geninfo_all_blocks=1 00:34:17.358 --rc geninfo_unexecuted_blocks=1 00:34:17.358 00:34:17.358 ' 00:34:17.358 14:54:24 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:17.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.358 --rc genhtml_branch_coverage=1 00:34:17.358 --rc genhtml_function_coverage=1 00:34:17.358 --rc genhtml_legend=1 00:34:17.358 --rc geninfo_all_blocks=1 00:34:17.358 --rc geninfo_unexecuted_blocks=1 00:34:17.358 00:34:17.358 ' 00:34:17.358 14:54:24 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:34:17.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.358 --rc genhtml_branch_coverage=1 00:34:17.358 --rc genhtml_function_coverage=1 00:34:17.358 --rc genhtml_legend=1 00:34:17.358 --rc geninfo_all_blocks=1 00:34:17.358 --rc geninfo_unexecuted_blocks=1 00:34:17.358 00:34:17.358 ' 00:34:17.358 14:54:24 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:17.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.358 --rc genhtml_branch_coverage=1 00:34:17.358 --rc genhtml_function_coverage=1 00:34:17.358 --rc genhtml_legend=1 00:34:17.358 --rc geninfo_all_blocks=1 00:34:17.358 --rc geninfo_unexecuted_blocks=1 00:34:17.358 00:34:17.358 ' 00:34:17.358 14:54:24 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:17.358 14:54:24 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:17.358 14:54:24 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:17.358 14:54:24 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.358 14:54:24 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.358 14:54:24 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.358 14:54:24 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:17.358 14:54:24 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.358 14:54:24 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:17.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:17.359 14:54:24 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:17.359 14:54:24 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
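The `NULL_META=16`, `NULL_BLOCK_SIZE=512`, `NULL_SIZE=64`, and `NULL_DIF=1` defaults set here parameterize the null bdevs the dif test creates. A hedged sketch of how such a bdev could be created against a running target, assuming `scripts/rpc.py bdev_null_create` accepts `--md-size`/`--dif-type` options as in recent SPDK releases (this is illustrative, not the exact dif.sh invocation; it is a config fragment requiring a live nvmf_tgt):

```shell
# 64 MiB null bdev, 512-byte blocks, 16 bytes of per-block metadata,
# DIF type 1 protection (values mirror the NULL_* defaults above).
scripts/rpc.py bdev_null_create bdev_null0 64 512 \
    --md-size 16 --dif-type 1
```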
00:34:17.359 14:54:24 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:17.359 14:54:24 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:17.359 14:54:24 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.359 14:54:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:17.359 14:54:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:17.359 14:54:24 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:34:17.359 14:54:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:22.637 14:54:29 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:22.637 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:22.637 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:22.637 14:54:29 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:22.637 Found net devices under 0000:31:00.0: cvl_0_0 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:22.637 Found net devices under 0000:31:00.1: cvl_0_1 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:22.637 
14:54:29 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:22.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:22.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:34:22.637 00:34:22.637 --- 10.0.0.2 ping statistics --- 00:34:22.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.637 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:34:22.637 14:54:29 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:22.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:22.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:34:22.637 00:34:22.637 --- 10.0.0.1 ping statistics --- 00:34:22.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.637 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:34:22.638 14:54:29 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:22.638 14:54:29 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:34:22.638 14:54:29 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:22.638 14:54:29 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:25.177 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:25.177 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:25.177 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:25.177 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:25.177 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:25.177 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:25.177 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:25.177 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:25.177 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:25.177 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:34:25.177 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:25.177 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:25.177 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:34:25.177 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:25.177 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:25.177 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:25.177 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:25.177 14:54:31 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:25.177 14:54:31 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:25.177 14:54:31 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:25.177 14:54:31 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:25.177 14:54:31 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:25.177 14:54:31 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:25.177 14:54:31 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:25.177 14:54:31 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:25.177 14:54:31 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:25.177 14:54:31 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:25.177 14:54:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:25.177 14:54:31 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=25122 00:34:25.177 14:54:31 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 25122 00:34:25.177 14:54:31 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 25122 ']' 00:34:25.177 14:54:31 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:25.177 14:54:31 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:25.177 14:54:31 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:25.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:25.177 14:54:31 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:25.177 14:54:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:25.177 14:54:31 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:25.177 [2024-11-20 14:54:31.949198] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:34:25.177 [2024-11-20 14:54:31.949263] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:25.177 [2024-11-20 14:54:32.033479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:25.177 [2024-11-20 14:54:32.068998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:25.177 [2024-11-20 14:54:32.069029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:25.177 [2024-11-20 14:54:32.069037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:25.177 [2024-11-20 14:54:32.069043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:25.177 [2024-11-20 14:54:32.069050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
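The namespace-based TCP test bed assembled earlier in this trace (nvmf_tcp_init: interfaces cvl_0_0/cvl_0_1, namespace cvl_0_0_ns_spdk, addresses 10.0.0.1/10.0.0.2, port 4420) can be summarized as a standalone sketch. Interface names, addresses, and command order are taken from this log; the `run` dry-run wrapper is an illustrative addition — a live run would execute the commands directly as root on a machine with these NICs.

```shell
#!/usr/bin/env bash
# Sketch of the target-namespace setup performed by nvmf_tcp_init in this log.
# Dry-run: commands are printed, not executed (a real run needs root and the NICs).
set -euo pipefail

TARGET_IF=cvl_0_0        # NIC moved into the target namespace
INITIATOR_IF=cvl_0_1     # NIC left in the host (initiator) namespace
NS=cvl_0_0_ns_spdk
INITIATOR_IP=10.0.0.1
TARGET_IP=10.0.0.2

run() { echo "+ $*"; }   # swap the echo for "$@" to actually apply the steps

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port on the initiator-facing interface
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity check in both directions, as in the ping output above
run ping -c 1 "$TARGET_IP"
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
```

With the target NIC isolated in the namespace, the nvmf target is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the log prepends NVMF_TARGET_NS_CMD to NVMF_APP.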
00:34:25.177 [2024-11-20 14:54:32.069627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:25.752 14:54:32 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:25.752 14:54:32 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:34:25.752 14:54:32 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:25.752 14:54:32 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:25.752 14:54:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:25.752 14:54:32 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:25.752 14:54:32 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:25.752 14:54:32 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:25.752 14:54:32 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.752 14:54:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:25.752 [2024-11-20 14:54:32.759631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:25.752 14:54:32 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.752 14:54:32 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:25.752 14:54:32 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:25.752 14:54:32 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:25.752 14:54:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:25.752 ************************************ 00:34:25.752 START TEST fio_dif_1_default 00:34:25.752 ************************************ 00:34:25.752 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:34:25.752 14:54:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:25.752 14:54:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:25.752 14:54:32 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:34:25.752 14:54:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:25.752 14:54:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:25.753 14:54:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:25.753 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.753 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:25.753 bdev_null0 00:34:25.753 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.753 14:54:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:25.753 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.753 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:25.753 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.753 14:54:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:25.753 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.753 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:26.038 [2024-11-20 14:54:32.815909] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:26.038 14:54:32 
nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:26.038 { 00:34:26.038 "params": { 00:34:26.038 "name": "Nvme$subsystem", 00:34:26.038 "trtype": "$TEST_TRANSPORT", 00:34:26.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.038 "adrfam": "ipv4", 00:34:26.038 "trsvcid": "$NVMF_PORT", 00:34:26.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.038 "hdgst": ${hdgst:-false}, 00:34:26.038 "ddgst": ${ddgst:-false} 00:34:26.038 }, 00:34:26.038 "method": "bdev_nvme_attach_controller" 00:34:26.038 } 00:34:26.038 EOF 00:34:26.038 )") 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:26.038 "params": { 00:34:26.038 "name": "Nvme0", 00:34:26.038 "trtype": "tcp", 00:34:26.038 "traddr": "10.0.0.2", 00:34:26.038 "adrfam": "ipv4", 00:34:26.038 "trsvcid": "4420", 00:34:26.038 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:26.038 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:26.038 "hdgst": false, 00:34:26.038 "ddgst": false 00:34:26.038 }, 00:34:26.038 "method": "bdev_nvme_attach_controller" 00:34:26.038 }' 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:26.038 14:54:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:26.301 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:26.301 fio-3.35 
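The JSON that gen_nvmf_target_json prints above (one `bdev_nvme_attach_controller` entry per subsystem, fed to fio's spdk_bdev plugin via /dev/fd/62) can be reproduced with a small helper. The values mirror the Nvme0 block in the log; the function name is illustrative, not part of the harness.

```shell
# Illustrative re-creation of the per-subsystem fio bdev config printed by
# gen_nvmf_target_json in this log (digests disabled, IPv4, NVMe/TCP on 4420).
gen_subsystem_json() {
  local sub=$1 traddr=$2 trsvcid=$3
  cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

config=$(gen_subsystem_json 0 10.0.0.2 4420)
```

In the harness the per-subsystem entries are joined with `IFS=,` and wrapped into a `subsystems` array before being handed to fio, which is what the `jq .` / `printf '%s\n'` steps in the trace perform.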
00:34:26.301 Starting 1 thread 00:34:38.516 00:34:38.516 filename0: (groupid=0, jobs=1): err= 0: pid=25643: Wed Nov 20 14:54:43 2024 00:34:38.516 read: IOPS=265, BW=1061KiB/s (1087kB/s)(10.4MiB/10011msec) 00:34:38.516 slat (nsec): min=4195, max=42528, avg=6057.12, stdev=1290.03 00:34:38.516 clat (usec): min=529, max=47262, avg=15059.42, stdev=19276.18 00:34:38.516 lat (usec): min=534, max=47281, avg=15065.48, stdev=19275.88 00:34:38.516 clat percentiles (usec): 00:34:38.516 | 1.00th=[ 586], 5.00th=[ 717], 10.00th=[ 775], 20.00th=[ 824], 00:34:38.516 | 30.00th=[ 848], 40.00th=[ 881], 50.00th=[ 906], 60.00th=[ 938], 00:34:38.516 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:38.516 | 99.00th=[41681], 99.50th=[42206], 99.90th=[47449], 99.95th=[47449], 00:34:38.516 | 99.99th=[47449] 00:34:38.516 bw ( KiB/s): min= 704, max= 5344, per=99.88%, avg=1060.80, stdev=1057.16, samples=20 00:34:38.516 iops : min= 176, max= 1336, avg=265.20, stdev=264.29, samples=20 00:34:38.516 lat (usec) : 750=6.59%, 1000=58.02% 00:34:38.516 lat (msec) : 2=0.15%, 50=35.24% 00:34:38.516 cpu : usr=93.65%, sys=6.13%, ctx=13, majf=0, minf=252 00:34:38.516 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:38.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.516 issued rwts: total=2656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:38.516 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:38.516 00:34:38.516 Run status group 0 (all jobs): 00:34:38.516 READ: bw=1061KiB/s (1087kB/s), 1061KiB/s-1061KiB/s (1087kB/s-1087kB/s), io=10.4MiB (10.9MB), run=10011-10011msec 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for 
sub in "$@" 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.516 00:34:38.516 real 0m11.107s 00:34:38.516 user 0m25.602s 00:34:38.516 sys 0m0.898s 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:38.516 ************************************ 00:34:38.516 END TEST fio_dif_1_default 00:34:38.516 ************************************ 00:34:38.516 14:54:43 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:38.516 14:54:43 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:38.516 14:54:43 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:38.516 14:54:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:38.516 ************************************ 00:34:38.516 START TEST fio_dif_1_multi_subsystems 00:34:38.516 ************************************ 00:34:38.516 14:54:43 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:38.516 bdev_null0 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.516 14:54:43 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:38.516 [2024-11-20 14:54:43.977857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:38.516 bdev_null1 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.516 14:54:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:38.516 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.516 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:38.516 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.516 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:38.516 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.516 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:38.516 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:38.516 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:38.516 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:38.516 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:38.516 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:38.516 14:54:44 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:38.516 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:38.516 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:38.516 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:38.516 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:38.516 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:38.516 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:38.517 { 00:34:38.517 "params": { 00:34:38.517 "name": "Nvme$subsystem", 00:34:38.517 "trtype": "$TEST_TRANSPORT", 00:34:38.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:38.517 "adrfam": "ipv4", 00:34:38.517 "trsvcid": "$NVMF_PORT", 00:34:38.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:38.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:38.517 "hdgst": ${hdgst:-false}, 00:34:38.517 "ddgst": ${ddgst:-false} 00:34:38.517 }, 00:34:38.517 "method": "bdev_nvme_attach_controller" 00:34:38.517 } 00:34:38.517 EOF 00:34:38.517 )") 
00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:38.517 { 00:34:38.517 "params": { 00:34:38.517 "name": "Nvme$subsystem", 00:34:38.517 "trtype": "$TEST_TRANSPORT", 00:34:38.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:38.517 "adrfam": "ipv4", 00:34:38.517 "trsvcid": "$NVMF_PORT", 00:34:38.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:38.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:38.517 "hdgst": ${hdgst:-false}, 00:34:38.517 "ddgst": ${ddgst:-false} 00:34:38.517 }, 00:34:38.517 "method": "bdev_nvme_attach_controller" 00:34:38.517 } 00:34:38.517 EOF 00:34:38.517 )") 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:38.517 "params": { 00:34:38.517 "name": "Nvme0", 00:34:38.517 "trtype": "tcp", 00:34:38.517 "traddr": "10.0.0.2", 00:34:38.517 "adrfam": "ipv4", 00:34:38.517 "trsvcid": "4420", 00:34:38.517 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:38.517 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:38.517 "hdgst": false, 00:34:38.517 "ddgst": false 00:34:38.517 }, 00:34:38.517 "method": "bdev_nvme_attach_controller" 00:34:38.517 },{ 00:34:38.517 "params": { 00:34:38.517 "name": "Nvme1", 00:34:38.517 "trtype": "tcp", 00:34:38.517 "traddr": "10.0.0.2", 00:34:38.517 "adrfam": "ipv4", 00:34:38.517 "trsvcid": "4420", 00:34:38.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:38.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:38.517 "hdgst": false, 00:34:38.517 "ddgst": false 00:34:38.517 }, 00:34:38.517 "method": "bdev_nvme_attach_controller" 00:34:38.517 }' 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:38.517 14:54:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:38.517 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:38.517 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:38.517 fio-3.35 00:34:38.517 Starting 2 threads 00:34:48.502 00:34:48.502 filename0: (groupid=0, jobs=1): err= 0: pid=28167: Wed Nov 20 14:54:55 2024 00:34:48.502 read: IOPS=192, BW=771KiB/s (789kB/s)(7712KiB/10008msec) 00:34:48.502 slat (nsec): min=3295, max=47364, avg=5950.48, stdev=1621.44 00:34:48.502 clat (usec): min=386, max=42347, avg=20745.12, stdev=20189.08 00:34:48.502 lat (usec): min=391, max=42353, avg=20751.07, stdev=20188.78 00:34:48.502 clat percentiles (usec): 00:34:48.502 | 1.00th=[ 570], 5.00th=[ 685], 10.00th=[ 775], 20.00th=[ 816], 00:34:48.502 | 30.00th=[ 840], 40.00th=[ 857], 50.00th=[ 1057], 60.00th=[41157], 00:34:48.502 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:48.502 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:48.502 | 99.99th=[42206] 00:34:48.502 bw ( KiB/s): min= 704, max= 832, per=66.22%, avg=769.60, stdev=26.42, samples=20 00:34:48.502 iops : min= 176, max= 208, avg=192.40, stdev= 6.60, samples=20 00:34:48.502 lat (usec) : 500=0.31%, 750=7.52%, 1000=41.55% 00:34:48.502 lat (msec) : 2=1.24%, 50=49.38% 00:34:48.502 cpu : usr=95.75%, sys=4.02%, ctx=14, majf=0, minf=250 00:34:48.502 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:34:48.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.502 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.502 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:48.502 filename1: (groupid=0, jobs=1): err= 0: pid=28168: Wed Nov 20 14:54:55 2024 00:34:48.502 read: IOPS=97, BW=391KiB/s (401kB/s)(3920KiB/10016msec) 00:34:48.502 slat (nsec): min=4328, max=22935, avg=6286.08, stdev=1722.09 00:34:48.502 clat (usec): min=675, max=43165, avg=40860.34, stdev=2583.42 00:34:48.502 lat (usec): min=681, max=43181, avg=40866.63, stdev=2583.49 00:34:48.502 clat percentiles (usec): 00:34:48.502 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:48.502 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:48.502 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:48.502 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:34:48.502 | 99.99th=[43254] 00:34:48.502 bw ( KiB/s): min= 384, max= 416, per=33.58%, avg=390.40, stdev=13.13, samples=20 00:34:48.502 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:34:48.502 lat (usec) : 750=0.41% 00:34:48.502 lat (msec) : 50=99.59% 00:34:48.502 cpu : usr=95.24%, sys=4.53%, ctx=12, majf=0, minf=75 00:34:48.502 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.502 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.502 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:48.502 00:34:48.502 Run status group 0 (all jobs): 00:34:48.502 READ: bw=1161KiB/s (1189kB/s), 391KiB/s-771KiB/s (401kB/s-789kB/s), io=11.4MiB (11.9MB), run=10008-10016msec 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.502 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.502 14:54:55 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.503 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:48.503 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.503 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.503 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.503 00:34:48.503 real 0m11.429s 00:34:48.503 user 0m32.256s 00:34:48.503 sys 0m1.155s 00:34:48.503 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:48.503 14:54:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.503 ************************************ 00:34:48.503 END TEST fio_dif_1_multi_subsystems 00:34:48.503 ************************************ 00:34:48.503 14:54:55 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:48.503 14:54:55 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:48.503 14:54:55 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:48.503 14:54:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:48.503 ************************************ 00:34:48.503 START TEST fio_dif_rand_params 00:34:48.503 ************************************ 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:48.503 14:54:55 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.503 bdev_null0 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.503 [2024-11-20 14:54:55.456050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 
00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:48.503 { 00:34:48.503 "params": { 00:34:48.503 "name": "Nvme$subsystem", 00:34:48.503 "trtype": "$TEST_TRANSPORT", 00:34:48.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:48.503 "adrfam": "ipv4", 00:34:48.503 "trsvcid": "$NVMF_PORT", 00:34:48.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:48.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:48.503 "hdgst": ${hdgst:-false}, 00:34:48.503 "ddgst": ${ddgst:-false} 00:34:48.503 }, 00:34:48.503 "method": "bdev_nvme_attach_controller" 00:34:48.503 } 00:34:48.503 EOF 00:34:48.503 )") 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:48.503 
14:54:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:48.503 "params": { 00:34:48.503 "name": "Nvme0", 00:34:48.503 "trtype": "tcp", 00:34:48.503 "traddr": "10.0.0.2", 00:34:48.503 "adrfam": "ipv4", 00:34:48.503 "trsvcid": "4420", 00:34:48.503 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:48.503 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:48.503 "hdgst": false, 00:34:48.503 "ddgst": false 00:34:48.503 }, 00:34:48.503 "method": "bdev_nvme_attach_controller" 00:34:48.503 }' 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:48.503 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:48.504 14:54:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:49.073 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:49.073 ... 00:34:49.073 fio-3.35 00:34:49.073 Starting 3 threads 00:34:55.651 00:34:55.651 filename0: (groupid=0, jobs=1): err= 0: pid=30839: Wed Nov 20 14:55:01 2024 00:34:55.651 read: IOPS=181, BW=22.6MiB/s (23.7MB/s)(113MiB/5011msec) 00:34:55.651 slat (nsec): min=4127, max=23883, avg=6747.23, stdev=1398.79 00:34:55.651 clat (msec): min=4, max=130, avg=16.56, stdev=21.39 00:34:55.651 lat (msec): min=4, max=130, avg=16.57, stdev=21.39 00:34:55.651 clat percentiles (msec): 00:34:55.651 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:34:55.651 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 9], 00:34:55.651 | 70.00th=[ 10], 80.00th=[ 12], 90.00th=[ 49], 95.00th=[ 52], 00:34:55.651 | 99.00th=[ 91], 99.50th=[ 92], 99.90th=[ 131], 99.95th=[ 131], 00:34:55.651 | 99.99th=[ 131] 00:34:55.651 bw ( KiB/s): min=12800, max=43776, per=24.23%, avg=23142.40, stdev=10313.13, samples=10 00:34:55.651 iops : min= 100, max= 342, avg=180.80, stdev=80.57, samples=10 00:34:55.651 lat (msec) : 10=74.42%, 20=8.27%, 50=10.92%, 100=5.95%, 250=0.44% 00:34:55.651 cpu : usr=95.95%, sys=3.67%, ctx=190, majf=0, minf=78 00:34:55.651 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:55.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.651 issued rwts: total=907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.651 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:55.651 filename0: (groupid=0, jobs=1): err= 0: pid=30841: Wed Nov 20 14:55:01 2024 00:34:55.651 read: IOPS=237, BW=29.6MiB/s (31.1MB/s)(149MiB/5014msec) 00:34:55.651 slat (nsec): min=4544, max=20060, avg=6441.73, stdev=1121.02 00:34:55.651 clat (usec): min=3441, max=90950, avg=12640.42, 
stdev=16533.06 00:34:55.651 lat (usec): min=3446, max=90959, avg=12646.86, stdev=16533.23 00:34:55.651 clat percentiles (usec): 00:34:55.651 | 1.00th=[ 3949], 5.00th=[ 4490], 10.00th=[ 4883], 20.00th=[ 5342], 00:34:55.651 | 30.00th=[ 5669], 40.00th=[ 5997], 50.00th=[ 6390], 60.00th=[ 6849], 00:34:55.651 | 70.00th=[ 7373], 80.00th=[ 8029], 90.00th=[46924], 95.00th=[48497], 00:34:55.651 | 99.00th=[88605], 99.50th=[89654], 99.90th=[90702], 99.95th=[90702], 00:34:55.651 | 99.99th=[90702] 00:34:55.651 bw ( KiB/s): min=19200, max=51712, per=31.78%, avg=30361.60, stdev=11860.54, samples=10 00:34:55.651 iops : min= 150, max= 404, avg=237.20, stdev=92.66, samples=10 00:34:55.651 lat (msec) : 4=1.26%, 10=84.69%, 50=11.77%, 100=2.27% 00:34:55.651 cpu : usr=96.97%, sys=2.79%, ctx=10, majf=0, minf=70 00:34:55.651 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:55.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.651 issued rwts: total=1189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.651 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:55.651 filename0: (groupid=0, jobs=1): err= 0: pid=30842: Wed Nov 20 14:55:01 2024 00:34:55.651 read: IOPS=330, BW=41.4MiB/s (43.4MB/s)(209MiB/5045msec) 00:34:55.651 slat (nsec): min=4397, max=23849, avg=6202.42, stdev=978.82 00:34:55.651 clat (usec): min=3448, max=87849, avg=9034.70, stdev=9462.28 00:34:55.651 lat (usec): min=3454, max=87858, avg=9040.90, stdev=9462.38 00:34:55.651 clat percentiles (usec): 00:34:55.651 | 1.00th=[ 4228], 5.00th=[ 4752], 10.00th=[ 5080], 20.00th=[ 5669], 00:34:55.651 | 30.00th=[ 6128], 40.00th=[ 6456], 50.00th=[ 6915], 60.00th=[ 7439], 00:34:55.651 | 70.00th=[ 8094], 80.00th=[ 8848], 90.00th=[10421], 95.00th=[11863], 00:34:55.651 | 99.00th=[48497], 99.50th=[50070], 99.90th=[87557], 99.95th=[87557], 00:34:55.651 | 99.99th=[87557] 00:34:55.651 bw ( 
KiB/s): min=29242, max=59136, per=44.68%, avg=42681.00, stdev=10440.10, samples=10 00:34:55.651 iops : min= 228, max= 462, avg=333.40, stdev=81.63, samples=10 00:34:55.651 lat (msec) : 4=0.42%, 10=87.48%, 20=7.73%, 50=3.89%, 100=0.48% 00:34:55.651 cpu : usr=96.21%, sys=3.57%, ctx=13, majf=0, minf=128 00:34:55.651 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:55.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.651 issued rwts: total=1669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.651 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:55.651 00:34:55.651 Run status group 0 (all jobs): 00:34:55.651 READ: bw=93.3MiB/s (97.8MB/s), 22.6MiB/s-41.4MiB/s (23.7MB/s-43.4MB/s), io=471MiB (493MB), run=5011-5045msec 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.651 bdev_null0 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.651 [2024-11-20 14:55:01.602865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.651 bdev_null1 00:34:55.651 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.652 bdev_null2 00:34:55.652 
14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:55.652 { 00:34:55.652 "params": { 00:34:55.652 "name": "Nvme$subsystem", 00:34:55.652 "trtype": "$TEST_TRANSPORT", 00:34:55.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:55.652 "adrfam": "ipv4", 00:34:55.652 "trsvcid": "$NVMF_PORT", 00:34:55.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:55.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:55.652 "hdgst": ${hdgst:-false}, 00:34:55.652 "ddgst": ${ddgst:-false} 00:34:55.652 }, 00:34:55.652 "method": "bdev_nvme_attach_controller" 00:34:55.652 } 00:34:55.652 EOF 00:34:55.652 )") 00:34:55.652 14:55:01 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:55.652 { 00:34:55.652 "params": { 00:34:55.652 "name": "Nvme$subsystem", 00:34:55.652 "trtype": "$TEST_TRANSPORT", 00:34:55.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:55.652 "adrfam": "ipv4", 00:34:55.652 "trsvcid": "$NVMF_PORT", 00:34:55.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:55.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:55.652 "hdgst": ${hdgst:-false}, 00:34:55.652 "ddgst": ${ddgst:-false} 00:34:55.652 }, 00:34:55.652 "method": "bdev_nvme_attach_controller" 00:34:55.652 } 00:34:55.652 EOF 00:34:55.652 )") 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:55.652 
14:55:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:55.652 { 00:34:55.652 "params": { 00:34:55.652 "name": "Nvme$subsystem", 00:34:55.652 "trtype": "$TEST_TRANSPORT", 00:34:55.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:55.652 "adrfam": "ipv4", 00:34:55.652 "trsvcid": "$NVMF_PORT", 00:34:55.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:55.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:55.652 "hdgst": ${hdgst:-false}, 00:34:55.652 "ddgst": ${ddgst:-false} 00:34:55.652 }, 00:34:55.652 "method": "bdev_nvme_attach_controller" 00:34:55.652 } 00:34:55.652 EOF 00:34:55.652 )") 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:55.652 14:55:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:55.652 "params": { 00:34:55.652 "name": "Nvme0", 00:34:55.652 "trtype": "tcp", 00:34:55.652 "traddr": "10.0.0.2", 00:34:55.652 "adrfam": "ipv4", 00:34:55.652 "trsvcid": "4420", 00:34:55.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:55.652 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:55.652 "hdgst": false, 00:34:55.652 "ddgst": false 00:34:55.652 }, 00:34:55.653 "method": "bdev_nvme_attach_controller" 00:34:55.653 },{ 00:34:55.653 "params": { 00:34:55.653 "name": "Nvme1", 00:34:55.653 "trtype": "tcp", 00:34:55.653 "traddr": "10.0.0.2", 00:34:55.653 "adrfam": "ipv4", 00:34:55.653 "trsvcid": "4420", 00:34:55.653 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:55.653 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:55.653 "hdgst": false, 00:34:55.653 "ddgst": false 00:34:55.653 }, 00:34:55.653 "method": "bdev_nvme_attach_controller" 00:34:55.653 },{ 00:34:55.653 "params": { 00:34:55.653 "name": "Nvme2", 00:34:55.653 "trtype": "tcp", 00:34:55.653 "traddr": "10.0.0.2", 00:34:55.653 "adrfam": "ipv4", 00:34:55.653 "trsvcid": "4420", 00:34:55.653 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:55.653 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:55.653 "hdgst": false, 00:34:55.653 "ddgst": false 00:34:55.653 }, 00:34:55.653 "method": "bdev_nvme_attach_controller" 00:34:55.653 }' 00:34:55.653 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:55.653 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:55.653 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:55.653 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:55.653 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 
-- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:55.653 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:55.653 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:55.653 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:55.653 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:55.653 14:55:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:55.653 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:55.653 ... 00:34:55.653 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:55.653 ... 00:34:55.653 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:55.653 ... 
00:34:55.653 fio-3.35 00:34:55.653 Starting 24 threads 00:35:07.861 00:35:07.861 filename0: (groupid=0, jobs=1): err= 0: pid=32498: Wed Nov 20 14:55:13 2024 00:35:07.861 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.3MiB/10050msec) 00:35:07.861 slat (nsec): min=4026, max=99527, avg=14042.34, stdev=10525.60 00:35:07.861 clat (usec): min=8664, max=72859, avg=23684.56, stdev=3198.87 00:35:07.861 lat (usec): min=8672, max=72866, avg=23698.60, stdev=3199.92 00:35:07.861 clat percentiles (usec): 00:35:07.861 | 1.00th=[12911], 5.00th=[16581], 10.00th=[22676], 20.00th=[23725], 00:35:07.861 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:35:07.861 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[25297], 00:35:07.861 | 99.00th=[32113], 99.50th=[35390], 99.90th=[42730], 99.95th=[72877], 00:35:07.861 | 99.99th=[72877] 00:35:07.861 bw ( KiB/s): min= 2560, max= 3328, per=4.23%, avg=2692.00, stdev=163.24, samples=20 00:35:07.861 iops : min= 640, max= 832, avg=673.00, stdev=40.81, samples=20 00:35:07.861 lat (msec) : 10=0.09%, 20=8.60%, 50=91.23%, 100=0.09% 00:35:07.861 cpu : usr=97.45%, sys=1.54%, ctx=958, majf=0, minf=77 00:35:07.861 IO depths : 1=5.0%, 2=10.4%, 4=22.8%, 8=54.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:35:07.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.861 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.861 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.861 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.862 filename0: (groupid=0, jobs=1): err= 0: pid=32499: Wed Nov 20 14:55:13 2024 00:35:07.862 read: IOPS=664, BW=2658KiB/s (2722kB/s)(26.0MiB/10013msec) 00:35:07.862 slat (nsec): min=4148, max=91380, avg=22240.55, stdev=15251.24 00:35:07.862 clat (usec): min=13021, max=37695, avg=23868.29, stdev=1772.75 00:35:07.862 lat (usec): min=13053, max=37711, avg=23890.53, stdev=1774.31 00:35:07.862 clat percentiles (usec): 
00:35:07.862 | 1.00th=[16319], 5.00th=[21103], 10.00th=[23462], 20.00th=[23725], 00:35:07.862 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:35:07.862 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:35:07.862 | 99.00th=[28181], 99.50th=[29754], 99.90th=[33817], 99.95th=[37487], 00:35:07.862 | 99.99th=[37487] 00:35:07.862 bw ( KiB/s): min= 2560, max= 2928, per=4.17%, avg=2655.20, stdev=100.99, samples=20 00:35:07.862 iops : min= 640, max= 732, avg=663.80, stdev=25.25, samples=20 00:35:07.862 lat (msec) : 20=4.13%, 50=95.87% 00:35:07.862 cpu : usr=98.88%, sys=0.82%, ctx=72, majf=0, minf=37 00:35:07.862 IO depths : 1=5.5%, 2=11.0%, 4=22.6%, 8=53.6%, 16=7.3%, 32=0.0%, >=64=0.0% 00:35:07.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.862 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.862 issued rwts: total=6654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.862 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.862 filename0: (groupid=0, jobs=1): err= 0: pid=32500: Wed Nov 20 14:55:13 2024 00:35:07.862 read: IOPS=685, BW=2740KiB/s (2806kB/s)(26.8MiB/10017msec) 00:35:07.862 slat (nsec): min=5495, max=94103, avg=13714.19, stdev=11902.22 00:35:07.862 clat (usec): min=9353, max=41155, avg=23260.89, stdev=3933.79 00:35:07.862 lat (usec): min=9421, max=41188, avg=23274.60, stdev=3935.30 00:35:07.862 clat percentiles (usec): 00:35:07.862 | 1.00th=[14353], 5.00th=[16057], 10.00th=[17171], 20.00th=[20579], 00:35:07.862 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:35:07.862 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25560], 95.00th=[29492], 00:35:07.862 | 99.00th=[36963], 99.50th=[38536], 99.90th=[40633], 99.95th=[41157], 00:35:07.862 | 99.99th=[41157] 00:35:07.862 bw ( KiB/s): min= 2560, max= 2928, per=4.30%, avg=2739.15, stdev=103.16, samples=20 00:35:07.862 iops : min= 640, max= 732, avg=684.75, stdev=25.82, 
samples=20 00:35:07.862 lat (msec) : 10=0.23%, 20=17.78%, 50=81.99% 00:35:07.862 cpu : usr=98.75%, sys=0.96%, ctx=19, majf=0, minf=37 00:35:07.862 IO depths : 1=2.7%, 2=5.3%, 4=13.3%, 8=67.9%, 16=10.8%, 32=0.0%, >=64=0.0% 00:35:07.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.862 complete : 0=0.0%, 4=91.0%, 8=4.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.862 issued rwts: total=6862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.862 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.862 filename0: (groupid=0, jobs=1): err= 0: pid=32501: Wed Nov 20 14:55:13 2024 00:35:07.862 read: IOPS=652, BW=2611KiB/s (2674kB/s)(25.5MiB/10002msec) 00:35:07.862 slat (nsec): min=4135, max=98885, avg=25295.31, stdev=17013.49 00:35:07.862 clat (usec): min=13315, max=49610, avg=24267.53, stdev=2384.15 00:35:07.862 lat (usec): min=13344, max=49624, avg=24292.83, stdev=2383.81 00:35:07.862 clat percentiles (usec): 00:35:07.862 | 1.00th=[16057], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:35:07.862 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:35:07.862 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25035], 95.00th=[25822], 00:35:07.862 | 99.00th=[33817], 99.50th=[34866], 99.90th=[43779], 99.95th=[43779], 00:35:07.862 | 99.99th=[49546] 00:35:07.862 bw ( KiB/s): min= 2432, max= 2832, per=4.09%, avg=2601.26, stdev=106.97, samples=19 00:35:07.862 iops : min= 608, max= 708, avg=650.32, stdev=26.74, samples=19 00:35:07.862 lat (msec) : 20=2.53%, 50=97.47% 00:35:07.862 cpu : usr=99.18%, sys=0.54%, ctx=15, majf=0, minf=77 00:35:07.862 IO depths : 1=5.7%, 2=11.7%, 4=24.1%, 8=51.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:35:07.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.862 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.862 issued rwts: total=6530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.862 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:35:07.862 filename0: (groupid=0, jobs=1): err= 0: pid=32502: Wed Nov 20 14:55:13 2024 00:35:07.862 read: IOPS=676, BW=2708KiB/s (2773kB/s)(26.5MiB/10004msec) 00:35:07.862 slat (nsec): min=5655, max=88302, avg=8764.28, stdev=6731.27 00:35:07.862 clat (usec): min=8238, max=36905, avg=23567.05, stdev=3137.73 00:35:07.862 lat (usec): min=8247, max=36912, avg=23575.81, stdev=3137.52 00:35:07.862 clat percentiles (usec): 00:35:07.862 | 1.00th=[11731], 5.00th=[16712], 10.00th=[19006], 20.00th=[23725], 00:35:07.862 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:35:07.862 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25822], 00:35:07.862 | 99.00th=[31589], 99.50th=[32637], 99.90th=[35914], 99.95th=[35914], 00:35:07.862 | 99.99th=[36963] 00:35:07.862 bw ( KiB/s): min= 2560, max= 2960, per=4.25%, avg=2703.15, stdev=111.19, samples=20 00:35:07.862 iops : min= 640, max= 740, avg=675.75, stdev=27.77, samples=20 00:35:07.862 lat (msec) : 10=0.62%, 20=10.06%, 50=89.32% 00:35:07.862 cpu : usr=98.98%, sys=0.73%, ctx=11, majf=0, minf=68 00:35:07.862 IO depths : 1=3.9%, 2=8.8%, 4=20.8%, 8=57.7%, 16=8.8%, 32=0.0%, >=64=0.0% 00:35:07.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.862 complete : 0=0.0%, 4=93.1%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.862 issued rwts: total=6772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.862 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.862 filename0: (groupid=0, jobs=1): err= 0: pid=32503: Wed Nov 20 14:55:13 2024 00:35:07.862 read: IOPS=657, BW=2630KiB/s (2693kB/s)(25.7MiB/10002msec) 00:35:07.862 slat (usec): min=4, max=102, avg=19.21, stdev=14.36 00:35:07.862 clat (usec): min=13256, max=49591, avg=24161.59, stdev=1313.63 00:35:07.862 lat (usec): min=13262, max=49604, avg=24180.80, stdev=1312.36 00:35:07.862 clat percentiles (usec): 00:35:07.862 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 
20.00th=[23725], 00:35:07.862 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:35:07.862 | 70.00th=[24511], 80.00th=[24511], 90.00th=[25035], 95.00th=[25297], 00:35:07.862 | 99.00th=[25822], 99.50th=[25822], 99.90th=[43779], 99.95th=[43779], 00:35:07.862 | 99.99th=[49546] 00:35:07.862 bw ( KiB/s): min= 2432, max= 2688, per=4.12%, avg=2620.63, stdev=78.31, samples=19 00:35:07.862 iops : min= 608, max= 672, avg=655.16, stdev=19.58, samples=19 00:35:07.862 lat (msec) : 20=0.30%, 50=99.70% 00:35:07.862 cpu : usr=98.66%, sys=0.85%, ctx=129, majf=0, minf=38 00:35:07.862 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:07.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.862 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.862 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.862 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.862 filename0: (groupid=0, jobs=1): err= 0: pid=32504: Wed Nov 20 14:55:13 2024 00:35:07.862 read: IOPS=676, BW=2707KiB/s (2772kB/s)(26.5MiB/10011msec) 00:35:07.862 slat (nsec): min=4040, max=94046, avg=12940.91, stdev=11717.73 00:35:07.862 clat (usec): min=10775, max=41507, avg=23547.08, stdev=3622.71 00:35:07.862 lat (usec): min=10782, max=41515, avg=23560.03, stdev=3623.82 00:35:07.862 clat percentiles (usec): 00:35:07.862 | 1.00th=[14484], 5.00th=[16450], 10.00th=[18220], 20.00th=[21103], 00:35:07.862 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 00:35:07.862 | 70.00th=[24511], 80.00th=[24773], 90.00th=[26870], 95.00th=[29754], 00:35:07.862 | 99.00th=[34866], 99.50th=[37487], 99.90th=[39584], 99.95th=[41681], 00:35:07.862 | 99.99th=[41681] 00:35:07.862 bw ( KiB/s): min= 2560, max= 2944, per=4.25%, avg=2706.65, stdev=103.49, samples=20 00:35:07.862 iops : min= 640, max= 736, avg=676.65, stdev=25.89, samples=20 00:35:07.862 lat (msec) : 20=15.29%, 50=84.71% 
00:35:07.862 cpu : usr=99.20%, sys=0.52%, ctx=10, majf=0, minf=47 00:35:07.862 IO depths : 1=1.8%, 2=3.6%, 4=9.8%, 8=72.1%, 16=12.8%, 32=0.0%, >=64=0.0% 00:35:07.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.862 complete : 0=0.0%, 4=90.5%, 8=5.8%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.862 issued rwts: total=6776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.862 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.862 filename0: (groupid=0, jobs=1): err= 0: pid=32505: Wed Nov 20 14:55:13 2024 00:35:07.862 read: IOPS=657, BW=2629KiB/s (2692kB/s)(25.7MiB/10007msec) 00:35:07.862 slat (usec): min=4, max=102, avg=24.52, stdev=17.32 00:35:07.862 clat (usec): min=13218, max=48317, avg=24125.69, stdev=1844.83 00:35:07.862 lat (usec): min=13269, max=48329, avg=24150.21, stdev=1843.60 00:35:07.862 clat percentiles (usec): 00:35:07.862 | 1.00th=[17433], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:35:07.862 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:35:07.862 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:35:07.862 | 99.00th=[31065], 99.50th=[34866], 99.90th=[48497], 99.95th=[48497], 00:35:07.862 | 99.99th=[48497] 00:35:07.862 bw ( KiB/s): min= 2432, max= 2704, per=4.12%, avg=2620.63, stdev=78.49, samples=19 00:35:07.862 iops : min= 608, max= 676, avg=655.16, stdev=19.62, samples=19 00:35:07.862 lat (msec) : 20=1.34%, 50=98.66% 00:35:07.862 cpu : usr=99.03%, sys=0.69%, ctx=14, majf=0, minf=48 00:35:07.862 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:07.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.862 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.862 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.862 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.862 filename1: (groupid=0, jobs=1): err= 0: pid=32506: 
Wed Nov 20 14:55:13 2024 00:35:07.862 read: IOPS=664, BW=2657KiB/s (2721kB/s)(26.0MiB/10003msec) 00:35:07.862 slat (nsec): min=4343, max=94868, avg=17549.90, stdev=14803.73 00:35:07.862 clat (usec): min=6767, max=44300, avg=23970.90, stdev=2913.63 00:35:07.862 lat (usec): min=6773, max=44312, avg=23988.45, stdev=2913.94 00:35:07.862 clat percentiles (usec): 00:35:07.863 | 1.00th=[15664], 5.00th=[18744], 10.00th=[20841], 20.00th=[23462], 00:35:07.863 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:35:07.863 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25297], 95.00th=[28181], 00:35:07.863 | 99.00th=[33817], 99.50th=[38536], 99.90th=[44303], 99.95th=[44303], 00:35:07.863 | 99.99th=[44303] 00:35:07.863 bw ( KiB/s): min= 2432, max= 2800, per=4.16%, avg=2649.26, stdev=82.68, samples=19 00:35:07.863 iops : min= 608, max= 700, avg=662.32, stdev=20.67, samples=19 00:35:07.863 lat (msec) : 10=0.06%, 20=8.10%, 50=91.84% 00:35:07.863 cpu : usr=98.79%, sys=0.87%, ctx=110, majf=0, minf=43 00:35:07.863 IO depths : 1=1.8%, 2=4.1%, 4=10.4%, 8=70.3%, 16=13.4%, 32=0.0%, >=64=0.0% 00:35:07.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.863 complete : 0=0.0%, 4=90.9%, 8=6.0%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.863 issued rwts: total=6644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.863 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.863 filename1: (groupid=0, jobs=1): err= 0: pid=32507: Wed Nov 20 14:55:13 2024 00:35:07.863 read: IOPS=657, BW=2629KiB/s (2692kB/s)(25.7MiB/10006msec) 00:35:07.863 slat (nsec): min=4014, max=96099, avg=22324.90, stdev=15401.98 00:35:07.863 clat (usec): min=20606, max=38992, avg=24145.46, stdev=799.20 00:35:07.863 lat (usec): min=20612, max=39003, avg=24167.78, stdev=797.72 00:35:07.863 clat percentiles (usec): 00:35:07.863 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:35:07.863 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 
60.00th=[24249], 00:35:07.863 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:35:07.863 | 99.00th=[25822], 99.50th=[26084], 99.90th=[35390], 99.95th=[35914], 00:35:07.863 | 99.99th=[39060] 00:35:07.863 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2624.25, stdev=65.42, samples=20 00:35:07.863 iops : min= 640, max= 672, avg=656.05, stdev=16.37, samples=20 00:35:07.863 lat (msec) : 50=100.00% 00:35:07.863 cpu : usr=99.02%, sys=0.70%, ctx=19, majf=0, minf=48 00:35:07.863 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:07.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.863 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.863 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.863 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.863 filename1: (groupid=0, jobs=1): err= 0: pid=32508: Wed Nov 20 14:55:13 2024 00:35:07.863 read: IOPS=658, BW=2635KiB/s (2698kB/s)(25.8MiB/10008msec) 00:35:07.863 slat (nsec): min=4149, max=72593, avg=12705.55, stdev=8960.63 00:35:07.863 clat (usec): min=10531, max=33382, avg=24173.48, stdev=1056.50 00:35:07.863 lat (usec): min=10537, max=33394, avg=24186.19, stdev=1056.73 00:35:07.863 clat percentiles (usec): 00:35:07.863 | 1.00th=[22414], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:35:07.863 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:35:07.863 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:35:07.863 | 99.00th=[25560], 99.50th=[25822], 99.90th=[33424], 99.95th=[33424], 00:35:07.863 | 99.99th=[33424] 00:35:07.863 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2627.37, stdev=65.66, samples=19 00:35:07.863 iops : min= 640, max= 672, avg=656.84, stdev=16.42, samples=19 00:35:07.863 lat (msec) : 20=0.52%, 50=99.48% 00:35:07.863 cpu : usr=98.79%, sys=0.75%, ctx=143, majf=0, minf=52 00:35:07.863 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:07.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.863 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.863 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.863 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.863 filename1: (groupid=0, jobs=1): err= 0: pid=32509: Wed Nov 20 14:55:13 2024 00:35:07.863 read: IOPS=655, BW=2622KiB/s (2685kB/s)(25.6MiB/10012msec) 00:35:07.863 slat (usec): min=3, max=108, avg=19.46, stdev=16.10 00:35:07.863 clat (usec): min=12153, max=41869, avg=24241.41, stdev=2012.27 00:35:07.863 lat (usec): min=12163, max=41891, avg=24260.87, stdev=2011.80 00:35:07.863 clat percentiles (usec): 00:35:07.863 | 1.00th=[16188], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:35:07.863 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:35:07.863 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:35:07.863 | 99.00th=[32375], 99.50th=[33162], 99.90th=[40633], 99.95th=[41681], 00:35:07.863 | 99.99th=[41681] 00:35:07.863 bw ( KiB/s): min= 2560, max= 2688, per=4.11%, avg=2618.65, stdev=59.00, samples=20 00:35:07.863 iops : min= 640, max= 672, avg=654.65, stdev=14.76, samples=20 00:35:07.863 lat (msec) : 20=2.22%, 50=97.78% 00:35:07.863 cpu : usr=98.98%, sys=0.69%, ctx=100, majf=0, minf=72 00:35:07.863 IO depths : 1=5.3%, 2=11.1%, 4=23.8%, 8=52.5%, 16=7.2%, 32=0.0%, >=64=0.0% 00:35:07.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.863 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.863 issued rwts: total=6562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.863 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.863 filename1: (groupid=0, jobs=1): err= 0: pid=32510: Wed Nov 20 14:55:13 2024 00:35:07.863 read: IOPS=670, BW=2683KiB/s (2748kB/s)(26.2MiB/10017msec) 
00:35:07.863 slat (nsec): min=5667, max=81450, avg=8708.15, stdev=5423.81 00:35:07.863 clat (usec): min=8329, max=26323, avg=23775.06, stdev=2168.22 00:35:07.863 lat (usec): min=8361, max=26330, avg=23783.76, stdev=2167.57 00:35:07.863 clat percentiles (usec): 00:35:07.863 | 1.00th=[13698], 5.00th=[19530], 10.00th=[23462], 20.00th=[23725], 00:35:07.863 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:35:07.863 | 70.00th=[24511], 80.00th=[24511], 90.00th=[25035], 95.00th=[25297], 00:35:07.863 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26346], 99.95th=[26346], 00:35:07.863 | 99.99th=[26346] 00:35:07.863 bw ( KiB/s): min= 2560, max= 2816, per=4.21%, avg=2682.35, stdev=76.63, samples=20 00:35:07.863 iops : min= 640, max= 704, avg=670.55, stdev=19.20, samples=20 00:35:07.863 lat (msec) : 10=0.48%, 20=4.76%, 50=94.76% 00:35:07.863 cpu : usr=99.05%, sys=0.68%, ctx=10, majf=0, minf=60 00:35:07.863 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:07.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.863 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.863 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.863 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.863 filename1: (groupid=0, jobs=1): err= 0: pid=32511: Wed Nov 20 14:55:13 2024 00:35:07.863 read: IOPS=678, BW=2714KiB/s (2779kB/s)(26.5MiB/10015msec) 00:35:07.863 slat (nsec): min=5652, max=99335, avg=19033.80, stdev=16389.71 00:35:07.863 clat (usec): min=8225, max=41110, avg=23433.95, stdev=3205.12 00:35:07.863 lat (usec): min=8238, max=41117, avg=23452.98, stdev=3207.52 00:35:07.863 clat percentiles (usec): 00:35:07.863 | 1.00th=[13566], 5.00th=[16188], 10.00th=[18220], 20.00th=[23462], 00:35:07.863 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 00:35:07.863 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[25560], 
00:35:07.863 | 99.00th=[31851], 99.50th=[35914], 99.90th=[40633], 99.95th=[41157], 00:35:07.863 | 99.99th=[41157] 00:35:07.863 bw ( KiB/s): min= 2560, max= 3248, per=4.26%, avg=2712.75, stdev=148.76, samples=20 00:35:07.863 iops : min= 640, max= 812, avg=678.15, stdev=37.22, samples=20 00:35:07.863 lat (msec) : 10=0.44%, 20=12.15%, 50=87.40% 00:35:07.863 cpu : usr=98.95%, sys=0.61%, ctx=76, majf=0, minf=51 00:35:07.863 IO depths : 1=2.7%, 2=7.0%, 4=20.0%, 8=60.4%, 16=9.9%, 32=0.0%, >=64=0.0% 00:35:07.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.863 complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.863 issued rwts: total=6796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.863 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.863 filename1: (groupid=0, jobs=1): err= 0: pid=32512: Wed Nov 20 14:55:13 2024 00:35:07.863 read: IOPS=656, BW=2627KiB/s (2690kB/s)(25.7MiB/10014msec) 00:35:07.863 slat (nsec): min=4019, max=90101, avg=11811.10, stdev=9884.91 00:35:07.863 clat (usec): min=11221, max=41665, avg=24261.36, stdev=1276.72 00:35:07.863 lat (usec): min=11232, max=41677, avg=24273.17, stdev=1276.27 00:35:07.863 clat percentiles (usec): 00:35:07.863 | 1.00th=[22414], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:35:07.863 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:35:07.863 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:35:07.863 | 99.00th=[28181], 99.50th=[31851], 99.90th=[41681], 99.95th=[41681], 00:35:07.863 | 99.99th=[41681] 00:35:07.863 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2622.65, stdev=64.55, samples=20 00:35:07.863 iops : min= 640, max= 672, avg=655.65, stdev=16.13, samples=20 00:35:07.863 lat (msec) : 20=0.82%, 50=99.18% 00:35:07.863 cpu : usr=99.17%, sys=0.54%, ctx=11, majf=0, minf=56 00:35:07.863 IO depths : 1=6.0%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:07.863 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.863 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.863 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.863 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.863 filename1: (groupid=0, jobs=1): err= 0: pid=32513: Wed Nov 20 14:55:13 2024 00:35:07.863 read: IOPS=657, BW=2629KiB/s (2692kB/s)(25.7MiB/10005msec) 00:35:07.863 slat (nsec): min=4066, max=99898, avg=21488.57, stdev=13953.29 00:35:07.863 clat (usec): min=13347, max=44438, avg=24140.75, stdev=1303.73 00:35:07.863 lat (usec): min=13358, max=44454, avg=24162.23, stdev=1303.17 00:35:07.863 clat percentiles (usec): 00:35:07.863 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:35:07.863 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:35:07.863 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:35:07.863 | 99.00th=[25822], 99.50th=[28443], 99.90th=[44303], 99.95th=[44303], 00:35:07.863 | 99.99th=[44303] 00:35:07.863 bw ( KiB/s): min= 2436, max= 2688, per=4.12%, avg=2620.84, stdev=77.78, samples=19 00:35:07.863 iops : min= 609, max= 672, avg=655.21, stdev=19.44, samples=19 00:35:07.864 lat (msec) : 20=0.33%, 50=99.67% 00:35:07.864 cpu : usr=99.01%, sys=0.67%, ctx=83, majf=0, minf=75 00:35:07.864 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:07.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.864 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.864 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.864 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.864 filename2: (groupid=0, jobs=1): err= 0: pid=32514: Wed Nov 20 14:55:13 2024 00:35:07.864 read: IOPS=687, BW=2749KiB/s (2815kB/s)(26.9MiB/10014msec) 00:35:07.864 slat (nsec): min=4066, max=99962, avg=14644.49, 
stdev=13387.59 00:35:07.864 clat (usec): min=8446, max=41768, avg=23169.70, stdev=4032.51 00:35:07.864 lat (usec): min=8483, max=41776, avg=23184.34, stdev=4034.48 00:35:07.864 clat percentiles (usec): 00:35:07.864 | 1.00th=[13829], 5.00th=[15926], 10.00th=[17171], 20.00th=[20317], 00:35:07.864 | 30.00th=[23200], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 00:35:07.864 | 70.00th=[24249], 80.00th=[24773], 90.00th=[26084], 95.00th=[29230], 00:35:07.864 | 99.00th=[37487], 99.50th=[39060], 99.90th=[40633], 99.95th=[41681], 00:35:07.864 | 99.99th=[41681] 00:35:07.864 bw ( KiB/s): min= 2560, max= 2896, per=4.32%, avg=2746.65, stdev=90.67, samples=20 00:35:07.864 iops : min= 640, max= 724, avg=686.65, stdev=22.66, samples=20 00:35:07.864 lat (msec) : 10=0.15%, 20=19.14%, 50=80.72% 00:35:07.864 cpu : usr=98.76%, sys=0.87%, ctx=49, majf=0, minf=46 00:35:07.864 IO depths : 1=2.6%, 2=5.2%, 4=12.9%, 8=68.4%, 16=10.9%, 32=0.0%, >=64=0.0% 00:35:07.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.864 complete : 0=0.0%, 4=90.9%, 8=4.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.864 issued rwts: total=6882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.864 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.864 filename2: (groupid=0, jobs=1): err= 0: pid=32515: Wed Nov 20 14:55:13 2024 00:35:07.864 read: IOPS=669, BW=2676KiB/s (2740kB/s)(26.1MiB/10002msec) 00:35:07.864 slat (nsec): min=4372, max=88014, avg=17263.98, stdev=12982.09 00:35:07.864 clat (usec): min=10021, max=44167, avg=23769.74, stdev=3396.92 00:35:07.864 lat (usec): min=10029, max=44179, avg=23787.01, stdev=3397.97 00:35:07.864 clat percentiles (usec): 00:35:07.864 | 1.00th=[12518], 5.00th=[16712], 10.00th=[20841], 20.00th=[23462], 00:35:07.864 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:35:07.864 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25560], 00:35:07.864 | 99.00th=[36439], 99.50th=[38011], 
99.90th=[44303], 99.95th=[44303], 00:35:07.864 | 99.99th=[44303] 00:35:07.864 bw ( KiB/s): min= 2432, max= 3008, per=4.19%, avg=2669.47, stdev=118.57, samples=19 00:35:07.864 iops : min= 608, max= 752, avg=667.37, stdev=29.64, samples=19 00:35:07.864 lat (msec) : 20=8.74%, 50=91.26% 00:35:07.864 cpu : usr=99.12%, sys=0.62%, ctx=29, majf=0, minf=38 00:35:07.864 IO depths : 1=4.1%, 2=8.7%, 4=19.9%, 8=58.4%, 16=8.9%, 32=0.0%, >=64=0.0% 00:35:07.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.864 complete : 0=0.0%, 4=92.9%, 8=1.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.864 issued rwts: total=6692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.864 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.864 filename2: (groupid=0, jobs=1): err= 0: pid=32516: Wed Nov 20 14:55:13 2024 00:35:07.864 read: IOPS=662, BW=2651KiB/s (2715kB/s)(25.9MiB/10017msec) 00:35:07.864 slat (nsec): min=5653, max=67455, avg=9250.54, stdev=5429.24 00:35:07.864 clat (usec): min=6379, max=37955, avg=24055.95, stdev=1900.54 00:35:07.864 lat (usec): min=6388, max=37962, avg=24065.20, stdev=1900.08 00:35:07.864 clat percentiles (usec): 00:35:07.864 | 1.00th=[14615], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:35:07.864 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:35:07.864 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:35:07.864 | 99.00th=[25822], 99.50th=[25822], 99.90th=[37487], 99.95th=[37487], 00:35:07.864 | 99.99th=[38011] 00:35:07.864 bw ( KiB/s): min= 2560, max= 2816, per=4.16%, avg=2650.35, stdev=72.63, samples=20 00:35:07.864 iops : min= 640, max= 704, avg=662.55, stdev=18.18, samples=20 00:35:07.864 lat (msec) : 10=0.48%, 20=1.45%, 50=98.07% 00:35:07.864 cpu : usr=98.93%, sys=0.80%, ctx=9, majf=0, minf=48 00:35:07.864 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:07.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:35:07.864 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.864 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.864 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.864 filename2: (groupid=0, jobs=1): err= 0: pid=32517: Wed Nov 20 14:55:13 2024 00:35:07.864 read: IOPS=654, BW=2616KiB/s (2679kB/s)(25.6MiB/10009msec) 00:35:07.864 slat (nsec): min=4072, max=94857, avg=13605.85, stdev=12141.19 00:35:07.864 clat (usec): min=10452, max=47261, avg=24402.05, stdev=2878.87 00:35:07.864 lat (usec): min=10458, max=47268, avg=24415.66, stdev=2878.78 00:35:07.864 clat percentiles (usec): 00:35:07.864 | 1.00th=[15795], 5.00th=[19792], 10.00th=[23200], 20.00th=[23725], 00:35:07.864 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:35:07.864 | 70.00th=[24511], 80.00th=[25035], 90.00th=[25560], 95.00th=[29230], 00:35:07.864 | 99.00th=[35390], 99.50th=[39060], 99.90th=[43779], 99.95th=[47449], 00:35:07.864 | 99.99th=[47449] 00:35:07.864 bw ( KiB/s): min= 2480, max= 2752, per=4.10%, avg=2609.68, stdev=60.78, samples=19 00:35:07.864 iops : min= 620, max= 688, avg=652.42, stdev=15.20, samples=19 00:35:07.864 lat (msec) : 20=5.47%, 50=94.53% 00:35:07.864 cpu : usr=98.62%, sys=0.83%, ctx=207, majf=0, minf=59 00:35:07.864 IO depths : 1=0.1%, 2=0.3%, 4=2.3%, 8=80.1%, 16=17.2%, 32=0.0%, >=64=0.0% 00:35:07.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.864 complete : 0=0.0%, 4=89.5%, 8=9.3%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.864 issued rwts: total=6547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.864 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.864 filename2: (groupid=0, jobs=1): err= 0: pid=32518: Wed Nov 20 14:55:13 2024 00:35:07.864 read: IOPS=684, BW=2736KiB/s (2802kB/s)(26.8MiB/10017msec) 00:35:07.864 slat (nsec): min=5494, max=91997, avg=14133.65, stdev=12159.14 00:35:07.864 clat (usec): min=9275, max=40813, 
avg=23295.91, stdev=4420.10 00:35:07.864 lat (usec): min=9284, max=40827, avg=23310.04, stdev=4421.79 00:35:07.864 clat percentiles (usec): 00:35:07.864 | 1.00th=[12780], 5.00th=[15926], 10.00th=[17433], 20.00th=[19792], 00:35:07.864 | 30.00th=[23200], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 00:35:07.864 | 70.00th=[24511], 80.00th=[24773], 90.00th=[27132], 95.00th=[30540], 00:35:07.864 | 99.00th=[38011], 99.50th=[38536], 99.90th=[40109], 99.95th=[40633], 00:35:07.864 | 99.99th=[40633] 00:35:07.864 bw ( KiB/s): min= 2565, max= 2976, per=4.30%, avg=2735.15, stdev=110.45, samples=20 00:35:07.864 iops : min= 641, max= 744, avg=683.75, stdev=27.65, samples=20 00:35:07.864 lat (msec) : 10=0.19%, 20=21.00%, 50=78.81% 00:35:07.864 cpu : usr=98.86%, sys=0.81%, ctx=36, majf=0, minf=43 00:35:07.864 IO depths : 1=1.6%, 2=3.8%, 4=10.4%, 8=71.8%, 16=12.4%, 32=0.0%, >=64=0.0% 00:35:07.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.864 complete : 0=0.0%, 4=90.4%, 8=5.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.864 issued rwts: total=6852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.864 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.864 filename2: (groupid=0, jobs=1): err= 0: pid=32519: Wed Nov 20 14:55:13 2024 00:35:07.864 read: IOPS=657, BW=2629KiB/s (2692kB/s)(25.7MiB/10006msec) 00:35:07.864 slat (usec): min=4, max=104, avg=18.92, stdev=14.46 00:35:07.864 clat (usec): min=19674, max=36524, avg=24183.77, stdev=823.54 00:35:07.864 lat (usec): min=19681, max=36542, avg=24202.69, stdev=822.15 00:35:07.864 clat percentiles (usec): 00:35:07.864 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:35:07.864 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:35:07.864 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:35:07.864 | 99.00th=[25822], 99.50th=[26084], 99.90th=[36439], 99.95th=[36439], 00:35:07.864 | 99.99th=[36439] 00:35:07.864 bw ( 
KiB/s): min= 2560, max= 2688, per=4.12%, avg=2624.00, stdev=65.66, samples=20 00:35:07.864 iops : min= 640, max= 672, avg=656.00, stdev=16.42, samples=20 00:35:07.864 lat (msec) : 20=0.06%, 50=99.94% 00:35:07.864 cpu : usr=98.85%, sys=0.77%, ctx=121, majf=0, minf=74 00:35:07.864 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:07.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.864 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.864 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.864 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.864 filename2: (groupid=0, jobs=1): err= 0: pid=32520: Wed Nov 20 14:55:13 2024 00:35:07.864 read: IOPS=657, BW=2631KiB/s (2695kB/s)(25.7MiB/10005msec) 00:35:07.864 slat (nsec): min=4168, max=96141, avg=21163.02, stdev=16935.91 00:35:07.864 clat (usec): min=10972, max=42125, avg=24132.55, stdev=1908.82 00:35:07.865 lat (usec): min=10979, max=42138, avg=24153.72, stdev=1907.85 00:35:07.865 clat percentiles (usec): 00:35:07.865 | 1.00th=[16188], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:35:07.865 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:35:07.865 | 70.00th=[24511], 80.00th=[24511], 90.00th=[25035], 95.00th=[25297], 00:35:07.865 | 99.00th=[32113], 99.50th=[33424], 99.90th=[42206], 99.95th=[42206], 00:35:07.865 | 99.99th=[42206] 00:35:07.865 bw ( KiB/s): min= 2432, max= 2736, per=4.12%, avg=2623.16, stdev=80.08, samples=19 00:35:07.865 iops : min= 608, max= 684, avg=655.79, stdev=20.02, samples=19 00:35:07.865 lat (msec) : 20=2.25%, 50=97.75% 00:35:07.865 cpu : usr=98.49%, sys=0.96%, ctx=161, majf=0, minf=43 00:35:07.865 IO depths : 1=5.7%, 2=11.6%, 4=24.0%, 8=51.9%, 16=6.9%, 32=0.0%, >=64=0.0% 00:35:07.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.865 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:35:07.865 issued rwts: total=6582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.865 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.865 filename2: (groupid=0, jobs=1): err= 0: pid=32521: Wed Nov 20 14:55:13 2024 00:35:07.865 read: IOPS=658, BW=2634KiB/s (2697kB/s)(25.8MiB/10012msec) 00:35:07.865 slat (usec): min=3, max=101, avg=19.46, stdev=15.02 00:35:07.865 clat (usec): min=13213, max=32390, avg=24115.59, stdev=867.20 00:35:07.865 lat (usec): min=13219, max=32415, avg=24135.05, stdev=865.77 00:35:07.865 clat percentiles (usec): 00:35:07.865 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:35:07.865 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:35:07.865 | 70.00th=[24511], 80.00th=[24511], 90.00th=[25035], 95.00th=[25297], 00:35:07.865 | 99.00th=[25822], 99.50th=[25822], 99.90th=[29754], 99.95th=[29754], 00:35:07.865 | 99.99th=[32375] 00:35:07.865 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2630.65, stdev=65.06, samples=20 00:35:07.865 iops : min= 640, max= 672, avg=657.65, stdev=16.28, samples=20 00:35:07.865 lat (msec) : 20=0.27%, 50=99.73% 00:35:07.865 cpu : usr=98.67%, sys=0.78%, ctx=134, majf=0, minf=40 00:35:07.865 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:07.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.865 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.865 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.865 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:07.865 00:35:07.865 Run status group 0 (all jobs): 00:35:07.865 READ: bw=62.1MiB/s (65.2MB/s), 2611KiB/s-2749KiB/s (2674kB/s-2815kB/s), io=625MiB (655MB), run=10002-10050msec 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:07.865 
14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 
00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.865 bdev_null0 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.865 [2024-11-20 14:55:13.284795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.865 bdev_null1 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.865 14:55:13 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:07.865 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@560 -- # local subsystem config 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:07.866 { 00:35:07.866 "params": { 00:35:07.866 "name": "Nvme$subsystem", 00:35:07.866 "trtype": "$TEST_TRANSPORT", 00:35:07.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:07.866 "adrfam": "ipv4", 00:35:07.866 "trsvcid": "$NVMF_PORT", 00:35:07.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:07.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:07.866 "hdgst": ${hdgst:-false}, 00:35:07.866 "ddgst": ${ddgst:-false} 00:35:07.866 }, 00:35:07.866 "method": "bdev_nvme_attach_controller" 00:35:07.866 } 00:35:07.866 EOF 00:35:07.866 )") 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:07.866 { 00:35:07.866 "params": { 00:35:07.866 "name": "Nvme$subsystem", 00:35:07.866 "trtype": "$TEST_TRANSPORT", 00:35:07.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:07.866 "adrfam": "ipv4", 00:35:07.866 "trsvcid": "$NVMF_PORT", 00:35:07.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:07.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:07.866 "hdgst": ${hdgst:-false}, 00:35:07.866 "ddgst": ${ddgst:-false} 00:35:07.866 }, 00:35:07.866 "method": "bdev_nvme_attach_controller" 00:35:07.866 } 00:35:07.866 EOF 00:35:07.866 )") 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:07.866 "params": { 00:35:07.866 "name": "Nvme0", 00:35:07.866 "trtype": "tcp", 00:35:07.866 "traddr": "10.0.0.2", 00:35:07.866 "adrfam": "ipv4", 00:35:07.866 "trsvcid": "4420", 00:35:07.866 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:07.866 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:07.866 "hdgst": false, 00:35:07.866 "ddgst": false 00:35:07.866 }, 00:35:07.866 "method": "bdev_nvme_attach_controller" 00:35:07.866 },{ 00:35:07.866 "params": { 00:35:07.866 "name": "Nvme1", 00:35:07.866 "trtype": "tcp", 00:35:07.866 "traddr": "10.0.0.2", 00:35:07.866 "adrfam": "ipv4", 00:35:07.866 "trsvcid": "4420", 00:35:07.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:07.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:07.866 "hdgst": false, 00:35:07.866 "ddgst": false 00:35:07.866 }, 00:35:07.866 "method": "bdev_nvme_attach_controller" 00:35:07.866 }' 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:07.866 14:55:13 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:07.866 14:55:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:07.866 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:07.866 ... 00:35:07.866 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:07.866 ... 00:35:07.866 fio-3.35 00:35:07.866 Starting 4 threads 00:35:13.141 00:35:13.141 filename0: (groupid=0, jobs=1): err= 0: pid=35011: Wed Nov 20 14:55:19 2024 00:35:13.141 read: IOPS=2971, BW=23.2MiB/s (24.3MB/s)(116MiB/5002msec) 00:35:13.141 slat (nsec): min=2868, max=26585, avg=6030.25, stdev=1814.60 00:35:13.141 clat (usec): min=1372, max=4807, avg=2675.46, stdev=266.44 00:35:13.141 lat (usec): min=1378, max=4813, avg=2681.49, stdev=266.43 00:35:13.141 clat percentiles (usec): 00:35:13.141 | 1.00th=[ 1926], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2540], 00:35:13.141 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:35:13.141 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2933], 95.00th=[ 2999], 00:35:13.141 | 99.00th=[ 3720], 99.50th=[ 3916], 99.90th=[ 4228], 99.95th=[ 4359], 00:35:13.141 | 99.99th=[ 4686] 00:35:13.141 bw ( KiB/s): min=23616, max=24032, per=25.42%, avg=23779.20, stdev=143.86, samples=10 00:35:13.141 iops : min= 2952, max= 3004, avg=2972.40, stdev=17.98, samples=10 00:35:13.141 lat (msec) : 2=1.41%, 4=98.24%, 10=0.34% 00:35:13.141 cpu : usr=96.96%, sys=2.82%, ctx=5, majf=0, minf=20 00:35:13.141 IO depths : 1=0.1%, 2=0.3%, 4=72.2%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:13.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.141 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:35:13.141 issued rwts: total=14865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.141 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:13.141 filename0: (groupid=0, jobs=1): err= 0: pid=35012: Wed Nov 20 14:55:19 2024 00:35:13.141 read: IOPS=2900, BW=22.7MiB/s (23.8MB/s)(113MiB/5002msec) 00:35:13.141 slat (nsec): min=2865, max=28013, avg=6083.05, stdev=1916.74 00:35:13.141 clat (usec): min=1774, max=6059, avg=2741.86, stdev=303.45 00:35:13.141 lat (usec): min=1780, max=6069, avg=2747.94, stdev=303.46 00:35:13.141 clat percentiles (usec): 00:35:13.141 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2573], 00:35:13.141 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:35:13.141 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2966], 95.00th=[ 3261], 00:35:13.141 | 99.00th=[ 4047], 99.50th=[ 4113], 99.90th=[ 4621], 99.95th=[ 5276], 00:35:13.141 | 99.99th=[ 5342] 00:35:13.141 bw ( KiB/s): min=22989, max=23392, per=24.81%, avg=23206.10, stdev=116.05, samples=10 00:35:13.141 iops : min= 2873, max= 2924, avg=2900.70, stdev=14.64, samples=10 00:35:13.141 lat (msec) : 2=0.20%, 4=98.41%, 10=1.39% 00:35:13.141 cpu : usr=96.84%, sys=2.94%, ctx=6, majf=0, minf=54 00:35:13.141 IO depths : 1=0.1%, 2=0.1%, 4=71.4%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:13.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.141 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.141 issued rwts: total=14509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.141 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:13.141 filename1: (groupid=0, jobs=1): err= 0: pid=35013: Wed Nov 20 14:55:19 2024 00:35:13.141 read: IOPS=2952, BW=23.1MiB/s (24.2MB/s)(115MiB/5001msec) 00:35:13.141 slat (nsec): min=2866, max=27604, avg=6125.08, stdev=1887.28 00:35:13.141 clat (usec): min=1219, max=5942, avg=2693.43, stdev=282.40 00:35:13.141 lat (usec): min=1225, max=5952, 
avg=2699.56, stdev=282.37 00:35:13.141 clat percentiles (usec): 00:35:13.141 | 1.00th=[ 1942], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2540], 00:35:13.141 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:35:13.141 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2933], 95.00th=[ 3032], 00:35:13.141 | 99.00th=[ 3851], 99.50th=[ 4047], 99.90th=[ 4424], 99.95th=[ 5932], 00:35:13.141 | 99.99th=[ 5932] 00:35:13.141 bw ( KiB/s): min=23022, max=23920, per=25.25%, avg=23623.80, stdev=251.51, samples=10 00:35:13.141 iops : min= 2877, max= 2990, avg=2952.90, stdev=31.64, samples=10 00:35:13.141 lat (msec) : 2=1.27%, 4=98.20%, 10=0.53% 00:35:13.141 cpu : usr=97.16%, sys=2.60%, ctx=5, majf=0, minf=38 00:35:13.141 IO depths : 1=0.1%, 2=0.1%, 4=71.0%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:13.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.141 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.141 issued rwts: total=14767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.141 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:13.141 filename1: (groupid=0, jobs=1): err= 0: pid=35014: Wed Nov 20 14:55:19 2024 00:35:13.141 read: IOPS=2868, BW=22.4MiB/s (23.5MB/s)(112MiB/5001msec) 00:35:13.141 slat (nsec): min=3983, max=26954, avg=6494.38, stdev=2114.04 00:35:13.141 clat (usec): min=1358, max=5703, avg=2770.49, stdev=269.33 00:35:13.141 lat (usec): min=1364, max=5716, avg=2776.99, stdev=269.31 00:35:13.141 clat percentiles (usec): 00:35:13.141 | 1.00th=[ 2343], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2671], 00:35:13.141 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:35:13.141 | 70.00th=[ 2737], 80.00th=[ 2835], 90.00th=[ 2999], 95.00th=[ 3228], 00:35:13.141 | 99.00th=[ 4015], 99.50th=[ 4228], 99.90th=[ 4621], 99.95th=[ 4752], 00:35:13.141 | 99.99th=[ 5669] 00:35:13.141 bw ( KiB/s): min=22624, max=23104, per=24.50%, avg=22922.67, stdev=137.64, samples=9 
00:35:13.141 iops : min= 2828, max= 2888, avg=2865.33, stdev=17.20, samples=9 00:35:13.141 lat (msec) : 2=0.20%, 4=98.77%, 10=1.03% 00:35:13.141 cpu : usr=97.50%, sys=2.28%, ctx=7, majf=0, minf=68 00:35:13.141 IO depths : 1=0.1%, 2=0.2%, 4=73.5%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:13.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.141 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.141 issued rwts: total=14347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.141 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:13.141 00:35:13.141 Run status group 0 (all jobs): 00:35:13.141 READ: bw=91.4MiB/s (95.8MB/s), 22.4MiB/s-23.2MiB/s (23.5MB/s-24.3MB/s), io=457MiB (479MB), run=5001-5002msec 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.141 00:35:13.141 real 0m23.969s 00:35:13.141 user 5m9.571s 00:35:13.141 sys 0m3.787s 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:13.141 14:55:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.141 ************************************ 00:35:13.141 END TEST fio_dif_rand_params 00:35:13.141 ************************************ 00:35:13.141 14:55:19 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:13.141 14:55:19 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:13.142 14:55:19 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:13.142 14:55:19 nvmf_dif -- common/autotest_common.sh@10 -- # 
set +x 00:35:13.142 ************************************ 00:35:13.142 START TEST fio_dif_digest 00:35:13.142 ************************************ 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:13.142 bdev_null0 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:13.142 [2024-11-20 14:55:19.469673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:13.142 { 00:35:13.142 "params": { 00:35:13.142 "name": "Nvme$subsystem", 00:35:13.142 "trtype": "$TEST_TRANSPORT", 00:35:13.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:13.142 "adrfam": "ipv4", 00:35:13.142 "trsvcid": "$NVMF_PORT", 00:35:13.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:13.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:13.142 "hdgst": ${hdgst:-false}, 00:35:13.142 "ddgst": ${ddgst:-false} 00:35:13.142 }, 00:35:13.142 "method": "bdev_nvme_attach_controller" 00:35:13.142 } 00:35:13.142 EOF 00:35:13.142 )") 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:13.142 14:55:19 
nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:13.142 "params": { 00:35:13.142 "name": "Nvme0", 00:35:13.142 "trtype": "tcp", 00:35:13.142 "traddr": "10.0.0.2", 00:35:13.142 "adrfam": "ipv4", 00:35:13.142 "trsvcid": "4420", 00:35:13.142 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:13.142 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:13.142 "hdgst": true, 00:35:13.142 "ddgst": true 00:35:13.142 }, 00:35:13.142 "method": "bdev_nvme_attach_controller" 00:35:13.142 }' 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep 
libclang_rt.asan 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:13.142 14:55:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:13.142 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:13.142 ... 00:35:13.142 fio-3.35 00:35:13.142 Starting 3 threads 00:35:25.446 00:35:25.446 filename0: (groupid=0, jobs=1): err= 0: pid=36529: Wed Nov 20 14:55:30 2024 00:35:25.446 read: IOPS=295, BW=37.0MiB/s (38.8MB/s)(371MiB/10045msec) 00:35:25.446 slat (nsec): min=4161, max=52204, avg=7512.45, stdev=1334.68 00:35:25.446 clat (usec): min=6612, max=50646, avg=10122.77, stdev=1345.38 00:35:25.446 lat (usec): min=6620, max=50654, avg=10130.28, stdev=1345.39 00:35:25.446 clat percentiles (usec): 00:35:25.446 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:35:25.446 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:35:25.446 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:35:25.446 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13566], 99.95th=[47973], 00:35:25.446 | 99.99th=[50594] 00:35:25.447 bw ( KiB/s): min=36608, max=38912, per=33.47%, avg=37990.40, stdev=553.44, samples=20 00:35:25.447 iops : min= 286, max= 304, avg=296.80, stdev= 4.32, samples=20 00:35:25.447 lat (msec) : 10=45.19%, 20=54.75%, 50=0.03%, 100=0.03% 00:35:25.447 cpu : usr=96.27%, sys=3.45%, ctx=136, majf=0, minf=200 00:35:25.447 IO depths : 1=0.1%, 2=99.9%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.447 issued rwts: total=2970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.447 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:25.447 filename0: (groupid=0, jobs=1): err= 0: pid=36530: Wed Nov 20 14:55:30 2024 00:35:25.447 read: IOPS=294, BW=36.8MiB/s (38.6MB/s)(370MiB/10043msec) 00:35:25.447 slat (nsec): min=4349, max=36532, avg=7255.67, stdev=1391.11 00:35:25.447 clat (usec): min=7083, max=48475, avg=10169.61, stdev=1366.07 00:35:25.447 lat (usec): min=7089, max=48482, avg=10176.86, stdev=1366.10 00:35:25.447 clat percentiles (usec): 00:35:25.447 | 1.00th=[ 7832], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9372], 00:35:25.447 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10421], 00:35:25.447 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11338], 95.00th=[11731], 00:35:25.447 | 99.00th=[12649], 99.50th=[13042], 99.90th=[14877], 99.95th=[46924], 00:35:25.447 | 99.99th=[48497] 00:35:25.447 bw ( KiB/s): min=35840, max=40960, per=33.31%, avg=37811.20, stdev=1218.14, samples=20 00:35:25.447 iops : min= 280, max= 320, avg=295.40, stdev= 9.52, samples=20 00:35:25.447 lat (msec) : 10=42.15%, 20=57.78%, 50=0.07% 00:35:25.447 cpu : usr=96.15%, sys=3.61%, ctx=18, majf=0, minf=204 00:35:25.447 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.447 issued rwts: total=2956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.447 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:25.447 filename0: (groupid=0, jobs=1): err= 0: pid=36531: Wed Nov 20 14:55:30 2024 00:35:25.447 read: IOPS=296, BW=37.1MiB/s 
(38.9MB/s)(373MiB/10044msec) 00:35:25.447 slat (nsec): min=4458, max=37429, avg=7533.93, stdev=1447.14 00:35:25.447 clat (usec): min=6775, max=51283, avg=10084.80, stdev=1358.88 00:35:25.447 lat (usec): min=6782, max=51290, avg=10092.33, stdev=1358.91 00:35:25.447 clat percentiles (usec): 00:35:25.447 | 1.00th=[ 7963], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9372], 00:35:25.447 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:35:25.447 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:35:25.447 | 99.00th=[12387], 99.50th=[12649], 99.90th=[14353], 99.95th=[45876], 00:35:25.447 | 99.99th=[51119] 00:35:25.447 bw ( KiB/s): min=36096, max=42240, per=33.60%, avg=38134.85, stdev=1328.44, samples=20 00:35:25.447 iops : min= 282, max= 330, avg=297.90, stdev=10.41, samples=20 00:35:25.447 lat (msec) : 10=47.00%, 20=52.94%, 50=0.03%, 100=0.03% 00:35:25.447 cpu : usr=95.80%, sys=3.94%, ctx=25, majf=0, minf=113 00:35:25.447 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.447 issued rwts: total=2981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.447 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:25.447 00:35:25.447 Run status group 0 (all jobs): 00:35:25.447 READ: bw=111MiB/s (116MB/s), 36.8MiB/s-37.1MiB/s (38.6MB/s-38.9MB/s), io=1113MiB (1167MB), run=10043-10045msec 00:35:25.447 14:55:30 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:25.447 14:55:30 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:25.447 14:55:30 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:25.447 14:55:30 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:25.447 14:55:30 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:25.447 
14:55:30 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:25.447 14:55:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.447 14:55:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:25.447 14:55:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.447 14:55:30 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:25.447 14:55:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.447 14:55:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:25.447 14:55:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.447 00:35:25.447 real 0m11.069s 00:35:25.447 user 0m42.601s 00:35:25.447 sys 0m1.371s 00:35:25.447 14:55:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:25.447 14:55:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:25.447 ************************************ 00:35:25.447 END TEST fio_dif_digest 00:35:25.447 ************************************ 00:35:25.447 14:55:30 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:25.447 14:55:30 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:25.447 14:55:30 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:25.447 14:55:30 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:25.447 14:55:30 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:25.447 14:55:30 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:25.447 14:55:30 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:25.447 14:55:30 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:25.447 rmmod nvme_tcp 00:35:25.447 rmmod nvme_fabrics 00:35:25.447 rmmod nvme_keyring 00:35:25.447 14:55:30 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:25.447 
14:55:30 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:25.447 14:55:30 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:25.447 14:55:30 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 25122 ']' 00:35:25.447 14:55:30 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 25122 00:35:25.447 14:55:30 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 25122 ']' 00:35:25.447 14:55:30 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 25122 00:35:25.447 14:55:30 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:35:25.447 14:55:30 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.447 14:55:30 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 25122 00:35:25.447 14:55:30 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:25.447 14:55:30 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:25.447 14:55:30 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 25122' 00:35:25.447 killing process with pid 25122 00:35:25.447 14:55:30 nvmf_dif -- common/autotest_common.sh@973 -- # kill 25122 00:35:25.447 14:55:30 nvmf_dif -- common/autotest_common.sh@978 -- # wait 25122 00:35:25.447 14:55:30 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:25.447 14:55:30 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:25.706 Waiting for block devices as requested 00:35:25.706 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:25.965 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:25.965 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:25.965 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:25.965 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:26.224 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:26.224 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:26.224 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:26.224 0000:65:00.0 (144d a80a): vfio-pci -> nvme 
00:35:26.483 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:26.483 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:26.483 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:26.483 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:26.483 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:26.742 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:26.742 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:26.742 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:26.742 14:55:33 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:26.742 14:55:33 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:26.742 14:55:33 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:26.742 14:55:33 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:26.742 14:55:33 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:35:26.742 14:55:33 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:26.742 14:55:33 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:26.742 14:55:33 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:26.742 14:55:33 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.742 14:55:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:26.742 14:55:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:29.278 14:55:35 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:29.278 00:35:29.278 real 1m11.651s 00:35:29.278 user 7m49.494s 00:35:29.278 sys 0m16.674s 00:35:29.278 14:55:35 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:29.278 14:55:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:29.278 ************************************ 00:35:29.278 END TEST nvmf_dif 00:35:29.278 ************************************ 00:35:29.278 14:55:35 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:29.278 14:55:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:29.278 14:55:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:29.278 14:55:35 -- common/autotest_common.sh@10 -- # set +x 00:35:29.278 ************************************ 00:35:29.278 START TEST nvmf_abort_qd_sizes 00:35:29.278 ************************************ 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:29.278 * Looking for test storage... 00:35:29.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:29.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.278 --rc genhtml_branch_coverage=1 00:35:29.278 --rc genhtml_function_coverage=1 00:35:29.278 --rc 
genhtml_legend=1 00:35:29.278 --rc geninfo_all_blocks=1 00:35:29.278 --rc geninfo_unexecuted_blocks=1 00:35:29.278 00:35:29.278 ' 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:29.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.278 --rc genhtml_branch_coverage=1 00:35:29.278 --rc genhtml_function_coverage=1 00:35:29.278 --rc genhtml_legend=1 00:35:29.278 --rc geninfo_all_blocks=1 00:35:29.278 --rc geninfo_unexecuted_blocks=1 00:35:29.278 00:35:29.278 ' 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:29.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.278 --rc genhtml_branch_coverage=1 00:35:29.278 --rc genhtml_function_coverage=1 00:35:29.278 --rc genhtml_legend=1 00:35:29.278 --rc geninfo_all_blocks=1 00:35:29.278 --rc geninfo_unexecuted_blocks=1 00:35:29.278 00:35:29.278 ' 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:29.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.278 --rc genhtml_branch_coverage=1 00:35:29.278 --rc genhtml_function_coverage=1 00:35:29.278 --rc genhtml_legend=1 00:35:29.278 --rc geninfo_all_blocks=1 00:35:29.278 --rc geninfo_unexecuted_blocks=1 00:35:29.278 00:35:29.278 ' 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:29.278 14:55:35 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:29.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:35:29.279 14:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:34.552 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:34.552 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.552 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:34.552 Found net devices under 0000:31:00.0: cvl_0_0 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:34.553 Found net devices under 0000:31:00.1: cvl_0_1 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:34.553 14:55:40 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:34.553 14:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:34.553 14:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:34.553 14:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:34.553 14:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:34.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:34.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:35:34.553 00:35:34.553 --- 10.0.0.2 ping statistics --- 00:35:34.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.553 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:35:34.553 14:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:34.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:34.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:35:34.553 00:35:34.553 --- 10.0.0.1 ping statistics --- 00:35:34.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.553 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:35:34.553 14:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:34.553 14:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:35:34.553 14:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:34.553 14:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:36.459 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:36.459 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:36.459 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:36.459 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:36.459 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:36.459 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:36.459 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:36.459 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:36.459 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:36.459 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:36.459 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:36.459 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:36.459 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:36.459 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:36.459 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:35:36.459 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:36.459 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:35:36.459 14:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:36.459 14:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:36.459 14:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:36.459 14:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:36.459 14:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:36.459 14:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:36.459 14:55:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:36.459 14:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:36.459 14:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:36.459 14:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:36.459 14:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=46185 00:35:36.459 14:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:36.459 14:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 46185 00:35:36.459 14:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 46185 ']' 00:35:36.460 14:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:36.460 14:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:36.460 14:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:36.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:36.460 14:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:36.460 14:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:36.460 [2024-11-20 14:55:43.511868] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:35:36.460 [2024-11-20 14:55:43.511930] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:36.720 [2024-11-20 14:55:43.603191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:36.720 [2024-11-20 14:55:43.658022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:36.720 [2024-11-20 14:55:43.658079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:36.720 [2024-11-20 14:55:43.658087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:36.720 [2024-11-20 14:55:43.658094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:36.720 [2024-11-20 14:55:43.658100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:36.720 [2024-11-20 14:55:43.660555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:36.720 [2024-11-20 14:55:43.660726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:36.720 [2024-11-20 14:55:43.660885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:36.720 [2024-11-20 14:55:43.660886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.290 14:55:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:37.290 14:55:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:37.290 14:55:44 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:37.290 14:55:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:37.290 14:55:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:37.550 14:55:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:37.550 ************************************ 00:35:37.550 START TEST spdk_target_abort 00:35:37.550 ************************************ 00:35:37.550 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:37.550 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:37.550 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:35:37.550 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.550 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:37.809 spdk_targetn1 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:37.809 [2024-11-20 14:55:44.707420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:37.809 [2024-11-20 14:55:44.747793] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:37.809 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:37.810 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:37.810 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:37.810 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:37.810 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:37.810 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:37.810 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:37.810 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:37.810 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:37.810 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:37.810 14:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:38.069 [2024-11-20 14:55:44.992914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:24 len:8 PRP1 0x200004abe000 PRP2 0x0 00:35:38.069 [2024-11-20 14:55:44.992946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0004 p:1 m:0 dnr:0 00:35:38.069 [2024-11-20 14:55:45.000834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:272 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:35:38.069 [2024-11-20 14:55:45.000855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0023 p:1 m:0 dnr:0 00:35:38.069 [2024-11-20 14:55:45.008738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:512 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:35:38.069 [2024-11-20 
14:55:45.008757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0042 p:1 m:0 dnr:0 00:35:38.069 [2024-11-20 14:55:45.024771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1032 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:35:38.069 [2024-11-20 14:55:45.024792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0082 p:1 m:0 dnr:0 00:35:38.069 [2024-11-20 14:55:45.032954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1264 len:8 PRP1 0x200004abe000 PRP2 0x0 00:35:38.069 [2024-11-20 14:55:45.032973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00a0 p:1 m:0 dnr:0 00:35:38.069 [2024-11-20 14:55:45.044983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1816 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:35:38.069 [2024-11-20 14:55:45.045003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00e4 p:1 m:0 dnr:0 00:35:38.069 [2024-11-20 14:55:45.045224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1832 len:8 PRP1 0x200004abe000 PRP2 0x0 00:35:38.069 [2024-11-20 14:55:45.045238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00e6 p:1 m:0 dnr:0 00:35:38.069 [2024-11-20 14:55:45.047243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1960 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:35:38.069 [2024-11-20 14:55:45.047265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00f6 p:1 m:0 dnr:0 00:35:38.069 [2024-11-20 14:55:45.053123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2080 len:8 
PRP1 0x200004ac4000 PRP2 0x0 00:35:38.069 [2024-11-20 14:55:45.053141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:38.069 [2024-11-20 14:55:45.075840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2864 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:35:38.069 [2024-11-20 14:55:45.075861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:38.069 [2024-11-20 14:55:45.094821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3416 len:8 PRP1 0x200004abe000 PRP2 0x0 00:35:38.069 [2024-11-20 14:55:45.094841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00ad p:0 m:0 dnr:0 00:35:38.069 [2024-11-20 14:55:45.109796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3880 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:35:38.069 [2024-11-20 14:55:45.109816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00e9 p:0 m:0 dnr:0 00:35:41.359 Initializing NVMe Controllers 00:35:41.359 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:41.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:41.359 Initialization complete. Launching workers. 
00:35:41.359 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12682, failed: 12 00:35:41.359 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2982, failed to submit 9712 00:35:41.359 success 758, unsuccessful 2224, failed 0 00:35:41.359 14:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:41.359 14:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:41.359 [2024-11-20 14:55:48.137089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:296 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:35:41.359 [2024-11-20 14:55:48.137119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0035 p:1 m:0 dnr:0 00:35:41.359 [2024-11-20 14:55:48.201064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:1880 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:35:41.359 [2024-11-20 14:55:48.201086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:00f6 p:1 m:0 dnr:0 00:35:41.359 [2024-11-20 14:55:48.224946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:2448 len:8 PRP1 0x200004e52000 PRP2 0x0 00:35:41.359 [2024-11-20 14:55:48.224965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:41.359 [2024-11-20 14:55:48.245591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:2944 len:8 PRP1 0x200004e52000 PRP2 0x0 00:35:41.359 [2024-11-20 14:55:48.245610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY 
REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:41.359 [2024-11-20 14:55:48.256030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:3136 len:8 PRP1 0x200004e48000 PRP2 0x0 00:35:41.359 [2024-11-20 14:55:48.256048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:0093 p:0 m:0 dnr:0 00:35:41.359 [2024-11-20 14:55:48.272045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:3600 len:8 PRP1 0x200004e58000 PRP2 0x0 00:35:41.359 [2024-11-20 14:55:48.272063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:00c7 p:0 m:0 dnr:0 00:35:41.926 [2024-11-20 14:55:48.887169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:17872 len:8 PRP1 0x200004e3a000 PRP2 0x0 00:35:41.926 [2024-11-20 14:55:48.887194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:00bb p:1 m:0 dnr:0 00:35:42.859 [2024-11-20 14:55:49.782926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:38104 len:8 PRP1 0x200004e64000 PRP2 0x0 00:35:42.859 [2024-11-20 14:55:49.782955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:00a4 p:1 m:0 dnr:0 00:35:43.119 [2024-11-20 14:55:49.990128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:43096 len:8 PRP1 0x200004e60000 PRP2 0x0 00:35:43.119 [2024-11-20 14:55:49.990150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:44.497 Initializing NVMe Controllers 00:35:44.497 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:44.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) 
NSID 1 with lcore 0 00:35:44.497 Initialization complete. Launching workers. 00:35:44.497 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8621, failed: 9 00:35:44.497 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1235, failed to submit 7395 00:35:44.497 success 299, unsuccessful 936, failed 0 00:35:44.497 14:55:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:44.497 14:55:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:47.035 [2024-11-20 14:55:53.612841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:147 nsid:1 lba:253032 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:35:47.035 [2024-11-20 14:55:53.612871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:147 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:47.294 [2024-11-20 14:55:54.269689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:157 nsid:1 lba:330368 len:8 PRP1 0x200004b0e000 PRP2 0x0 00:35:47.294 [2024-11-20 14:55:54.269714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:157 cdw0:0 sqhd:00e9 p:1 m:0 dnr:0 00:35:47.553 Initializing NVMe Controllers 00:35:47.553 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:47.553 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:47.553 Initialization complete. Launching workers. 
00:35:47.553 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43990, failed: 2 00:35:47.553 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2678, failed to submit 41314 00:35:47.553 success 608, unsuccessful 2070, failed 0 00:35:47.553 14:55:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:47.553 14:55:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.553 14:55:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:47.553 14:55:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.553 14:55:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:47.553 14:55:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.553 14:55:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:49.459 14:55:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.459 14:55:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 46185 00:35:49.459 14:55:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 46185 ']' 00:35:49.459 14:55:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 46185 00:35:49.459 14:55:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:49.459 14:55:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:49.459 14:55:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 46185 00:35:49.459 14:55:56 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:49.459 14:55:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:49.459 14:55:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 46185' 00:35:49.459 killing process with pid 46185 00:35:49.459 14:55:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 46185 00:35:49.459 14:55:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 46185 00:35:49.459 00:35:49.459 real 0m12.081s 00:35:49.459 user 0m49.056s 00:35:49.459 sys 0m1.971s 00:35:49.459 14:55:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:49.459 14:55:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:49.459 ************************************ 00:35:49.459 END TEST spdk_target_abort 00:35:49.459 ************************************ 00:35:49.459 14:55:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:49.459 14:55:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:49.459 14:55:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:49.459 14:55:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:49.719 ************************************ 00:35:49.719 START TEST kernel_target_abort 00:35:49.719 ************************************ 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:35:49.719 14:55:56 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:49.719 14:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:52.257 Waiting for block devices as requested 00:35:52.257 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:52.257 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:52.257 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:52.257 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:52.257 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:52.257 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:52.258 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:52.258 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:52.258 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:52.517 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:52.517 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:52.517 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:52.517 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:52.778 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:52.778 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:52.778 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:52.778 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:52.778 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:52.778 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:52.778 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:52.778 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:35:52.778 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:52.778 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:52.778 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:52.778 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:52.778 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:52.778 No valid GPT data, bailing 00:35:52.778 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:52.778 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:52.778 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:52.778 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:52.778 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:52.778 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:52.778 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:52.778 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:35:53.038 00:35:53.038 Discovery Log Number of Records 2, Generation counter 2 00:35:53.038 =====Discovery Log Entry 0====== 00:35:53.038 trtype: tcp 00:35:53.038 adrfam: ipv4 00:35:53.038 subtype: current discovery subsystem 00:35:53.038 treq: not specified, sq flow control disable supported 00:35:53.038 portid: 1 00:35:53.038 trsvcid: 4420 00:35:53.038 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:53.038 traddr: 10.0.0.1 00:35:53.038 eflags: none 00:35:53.038 sectype: none 00:35:53.038 =====Discovery Log Entry 1====== 00:35:53.038 trtype: tcp 00:35:53.038 adrfam: ipv4 00:35:53.038 subtype: nvme subsystem 00:35:53.038 treq: not specified, sq flow control disable supported 00:35:53.038 portid: 1 00:35:53.038 trsvcid: 4420 00:35:53.038 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:53.038 traddr: 10.0.0.1 00:35:53.038 eflags: none 00:35:53.038 sectype: none 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:53.038 14:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:56.328 Initializing NVMe Controllers 00:35:56.328 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:56.328 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:56.328 Initialization complete. Launching workers. 
00:35:56.328 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95179, failed: 0 00:35:56.328 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 95179, failed to submit 0 00:35:56.328 success 0, unsuccessful 95179, failed 0 00:35:56.328 14:56:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:56.328 14:56:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:58.939 Initializing NVMe Controllers 00:35:58.939 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:58.939 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:58.939 Initialization complete. Launching workers. 00:35:58.939 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 155357, failed: 0 00:35:58.939 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 39010, failed to submit 116347 00:35:58.939 success 0, unsuccessful 39010, failed 0 00:35:58.939 14:56:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:58.939 14:56:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:02.241 Initializing NVMe Controllers 00:36:02.241 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:02.241 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:02.241 Initialization complete. Launching workers. 
00:36:02.241 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146458, failed: 0 00:36:02.241 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36650, failed to submit 109808 00:36:02.241 success 0, unsuccessful 36650, failed 0 00:36:02.241 14:56:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:02.241 14:56:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:02.241 14:56:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:36:02.241 14:56:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:02.241 14:56:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:02.241 14:56:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:02.241 14:56:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:02.241 14:56:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:02.241 14:56:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:02.241 14:56:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:04.784 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:04.784 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:04.784 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:04.784 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:04.784 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:04.784 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:04.784 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:04.784 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:04.784 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:04.784 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:04.784 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:04.784 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:04.784 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:04.784 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:04.784 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:04.784 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:06.165 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:06.165 00:36:06.165 real 0m16.691s 00:36:06.165 user 0m8.560s 00:36:06.165 sys 0m3.922s 00:36:06.165 14:56:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:06.165 14:56:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:06.165 ************************************ 00:36:06.165 END TEST kernel_target_abort 00:36:06.165 ************************************ 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:06.424 rmmod nvme_tcp 00:36:06.424 rmmod nvme_fabrics 00:36:06.424 rmmod nvme_keyring 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 46185 ']' 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 46185 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 46185 ']' 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 46185 00:36:06.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (46185) - No such process 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 46185 is not found' 00:36:06.424 Process with pid 46185 is not found 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:06.424 14:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:08.962 Waiting for block devices as requested 00:36:08.962 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:08.962 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:08.962 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:08.962 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:08.962 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:08.962 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:08.962 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:08.962 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:08.962 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:09.221 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:09.221 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:09.221 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:09.481 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:09.481 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:09.481 0000:00:01.3 (8086 
0b00): vfio-pci -> ioatdma 00:36:09.481 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:09.481 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:09.741 14:56:16 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:09.741 14:56:16 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:09.741 14:56:16 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:09.741 14:56:16 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:09.741 14:56:16 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:09.741 14:56:16 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:09.741 14:56:16 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:09.741 14:56:16 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:09.741 14:56:16 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:09.741 14:56:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:09.741 14:56:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.649 14:56:18 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:11.649 00:36:11.649 real 0m42.795s 00:36:11.649 user 1m1.021s 00:36:11.649 sys 0m13.102s 00:36:11.649 14:56:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:11.649 14:56:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:11.649 ************************************ 00:36:11.649 END TEST nvmf_abort_qd_sizes 00:36:11.649 ************************************ 00:36:11.649 14:56:18 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:11.649 14:56:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:11.649 14:56:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:11.649 
14:56:18 -- common/autotest_common.sh@10 -- # set +x 00:36:11.649 ************************************ 00:36:11.649 START TEST keyring_file 00:36:11.649 ************************************ 00:36:11.649 14:56:18 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:11.649 * Looking for test storage... 00:36:11.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:11.649 14:56:18 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:11.649 14:56:18 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:36:11.649 14:56:18 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:11.910 14:56:18 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:11.910 14:56:18 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:11.910 14:56:18 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:11.910 14:56:18 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:11.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.910 --rc genhtml_branch_coverage=1 00:36:11.910 --rc genhtml_function_coverage=1 00:36:11.910 --rc genhtml_legend=1 00:36:11.910 --rc geninfo_all_blocks=1 00:36:11.910 --rc geninfo_unexecuted_blocks=1 00:36:11.910 00:36:11.910 ' 00:36:11.910 14:56:18 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:11.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.910 --rc genhtml_branch_coverage=1 00:36:11.910 --rc genhtml_function_coverage=1 00:36:11.910 --rc genhtml_legend=1 00:36:11.910 --rc geninfo_all_blocks=1 00:36:11.910 --rc geninfo_unexecuted_blocks=1 00:36:11.910 00:36:11.910 ' 
00:36:11.910 14:56:18 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:11.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.910 --rc genhtml_branch_coverage=1 00:36:11.910 --rc genhtml_function_coverage=1 00:36:11.911 --rc genhtml_legend=1 00:36:11.911 --rc geninfo_all_blocks=1 00:36:11.911 --rc geninfo_unexecuted_blocks=1 00:36:11.911 00:36:11.911 ' 00:36:11.911 14:56:18 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:11.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.911 --rc genhtml_branch_coverage=1 00:36:11.911 --rc genhtml_function_coverage=1 00:36:11.911 --rc genhtml_legend=1 00:36:11.911 --rc geninfo_all_blocks=1 00:36:11.911 --rc geninfo_unexecuted_blocks=1 00:36:11.911 00:36:11.911 ' 00:36:11.911 14:56:18 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:11.911 
14:56:18 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:11.911 14:56:18 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:11.911 14:56:18 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.911 14:56:18 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.911 14:56:18 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:11.911 14:56:18 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.911 14:56:18 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.911 14:56:18 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.911 14:56:18 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:11.911 14:56:18 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:36:11.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:11.911 14:56:18 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:11.911 14:56:18 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:11.911 14:56:18 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:11.911 14:56:18 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:11.911 14:56:18 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:11.911 14:56:18 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.D7hXTnT0FK 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.D7hXTnT0FK 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.D7hXTnT0FK 00:36:11.911 14:56:18 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.D7hXTnT0FK 00:36:11.911 14:56:18 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gvLXhKoVMM 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:11.911 14:56:18 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gvLXhKoVMM 00:36:11.911 14:56:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gvLXhKoVMM 00:36:11.911 14:56:18 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.gvLXhKoVMM 
00:36:11.911 14:56:18 keyring_file -- keyring/file.sh@30 -- # tgtpid=56703 00:36:11.911 14:56:18 keyring_file -- keyring/file.sh@32 -- # waitforlisten 56703 00:36:11.911 14:56:18 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 56703 ']' 00:36:11.911 14:56:18 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:11.911 14:56:18 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:11.911 14:56:18 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:11.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:11.911 14:56:18 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:11.911 14:56:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:11.911 14:56:18 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:11.911 [2024-11-20 14:56:18.890586] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:36:11.911 [2024-11-20 14:56:18.890643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56703 ] 00:36:11.911 [2024-11-20 14:56:18.968526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:12.172 [2024-11-20 14:56:19.007616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:12.739 14:56:19 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:12.739 14:56:19 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:12.739 14:56:19 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:12.739 14:56:19 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.739 14:56:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:12.739 [2024-11-20 14:56:19.689827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:12.739 null0 00:36:12.740 [2024-11-20 14:56:19.721880] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:12.740 [2024-11-20 14:56:19.722378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.740 14:56:19 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:12.740 [2024-11-20 14:56:19.749933] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:12.740 request: 00:36:12.740 { 00:36:12.740 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:12.740 "secure_channel": false, 00:36:12.740 "listen_address": { 00:36:12.740 "trtype": "tcp", 00:36:12.740 "traddr": "127.0.0.1", 00:36:12.740 "trsvcid": "4420" 00:36:12.740 }, 00:36:12.740 "method": "nvmf_subsystem_add_listener", 00:36:12.740 "req_id": 1 00:36:12.740 } 00:36:12.740 Got JSON-RPC error response 00:36:12.740 response: 00:36:12.740 { 00:36:12.740 "code": -32602, 00:36:12.740 "message": "Invalid parameters" 00:36:12.740 } 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:12.740 14:56:19 keyring_file -- keyring/file.sh@47 -- # bperfpid=56865 00:36:12.740 14:56:19 keyring_file -- keyring/file.sh@49 -- # waitforlisten 56865 /var/tmp/bperf.sock 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 56865 ']' 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:12.740 14:56:19 
keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:12.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:12.740 14:56:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:12.740 14:56:19 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:12.740 [2024-11-20 14:56:19.794441] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 00:36:12.740 [2024-11-20 14:56:19.794505] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56865 ] 00:36:13.000 [2024-11-20 14:56:19.878749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.000 [2024-11-20 14:56:19.932260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:13.569 14:56:20 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:13.569 14:56:20 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:13.569 14:56:20 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.D7hXTnT0FK 00:36:13.569 14:56:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.D7hXTnT0FK 00:36:13.827 14:56:20 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.gvLXhKoVMM 00:36:13.827 14:56:20 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.gvLXhKoVMM 00:36:14.087 14:56:20 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:14.087 14:56:20 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:14.087 14:56:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:14.087 14:56:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:14.087 14:56:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:14.087 14:56:21 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.D7hXTnT0FK == \/\t\m\p\/\t\m\p\.\D\7\h\X\T\n\T\0\F\K ]] 00:36:14.087 14:56:21 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:14.087 14:56:21 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:14.087 14:56:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:14.087 14:56:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:14.087 14:56:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:14.347 14:56:21 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.gvLXhKoVMM == \/\t\m\p\/\t\m\p\.\g\v\L\X\h\K\o\V\M\M ]] 00:36:14.347 14:56:21 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:14.347 14:56:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:14.347 14:56:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:14.347 14:56:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:14.347 14:56:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:14.347 14:56:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:36:14.347 14:56:21 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:14.347 14:56:21 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:14.347 14:56:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:14.347 14:56:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:14.347 14:56:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:14.347 14:56:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:14.347 14:56:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:14.606 14:56:21 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:14.606 14:56:21 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:14.606 14:56:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:14.865 [2024-11-20 14:56:21.709588] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:14.865 nvme0n1 00:36:14.865 14:56:21 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:14.865 14:56:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:14.865 14:56:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:14.865 14:56:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:14.865 14:56:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:14.865 14:56:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:36:15.124 14:56:21 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:15.124 14:56:21 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:15.124 14:56:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:15.124 14:56:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:15.124 14:56:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:15.124 14:56:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:15.124 14:56:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:15.124 14:56:22 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:15.124 14:56:22 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:15.383 Running I/O for 1 seconds... 00:36:16.321 21286.00 IOPS, 83.15 MiB/s 00:36:16.321 Latency(us) 00:36:16.321 [2024-11-20T13:56:23.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:16.321 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:16.321 nvme0n1 : 1.00 21334.12 83.34 0.00 0.00 5989.81 2348.37 10868.05 00:36:16.321 [2024-11-20T13:56:23.381Z] =================================================================================================================== 00:36:16.321 [2024-11-20T13:56:23.381Z] Total : 21334.12 83.34 0.00 0.00 5989.81 2348.37 10868.05 00:36:16.321 { 00:36:16.321 "results": [ 00:36:16.321 { 00:36:16.321 "job": "nvme0n1", 00:36:16.321 "core_mask": "0x2", 00:36:16.321 "workload": "randrw", 00:36:16.321 "percentage": 50, 00:36:16.321 "status": "finished", 00:36:16.321 "queue_depth": 128, 00:36:16.321 "io_size": 4096, 00:36:16.321 "runtime": 1.003791, 00:36:16.321 "iops": 21334.122342200717, 00:36:16.321 "mibps": 83.33641539922155, 
00:36:16.321 "io_failed": 0, 00:36:16.321 "io_timeout": 0, 00:36:16.321 "avg_latency_us": 5989.808504319401, 00:36:16.321 "min_latency_us": 2348.3733333333334, 00:36:16.321 "max_latency_us": 10868.053333333333 00:36:16.321 } 00:36:16.321 ], 00:36:16.321 "core_count": 1 00:36:16.321 } 00:36:16.321 14:56:23 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:16.321 14:56:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:16.321 14:56:23 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:16.580 14:56:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:16.580 14:56:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:16.580 14:56:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:16.580 14:56:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:16.580 14:56:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:16.580 14:56:23 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:16.580 14:56:23 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:16.580 14:56:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:16.580 14:56:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:16.580 14:56:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:16.581 14:56:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:16.581 14:56:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:16.839 14:56:23 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:16.839 14:56:23 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:16.839 14:56:23 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:16.839 14:56:23 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:16.839 14:56:23 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:16.839 14:56:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:16.839 14:56:23 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:16.839 14:56:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:16.839 14:56:23 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:16.839 14:56:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:16.839 [2024-11-20 14:56:23.855138] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:16.839 [2024-11-20 14:56:23.855907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11719d0 (107): Transport endpoint is not connected 00:36:16.839 [2024-11-20 14:56:23.856903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11719d0 (9): Bad file descriptor 00:36:16.839 [2024-11-20 14:56:23.857904] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:16.839 [2024-11-20 14:56:23.857911] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:16.839 [2024-11-20 14:56:23.857916] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:16.839 [2024-11-20 14:56:23.857922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:36:16.839 request: 00:36:16.839 { 00:36:16.839 "name": "nvme0", 00:36:16.839 "trtype": "tcp", 00:36:16.839 "traddr": "127.0.0.1", 00:36:16.839 "adrfam": "ipv4", 00:36:16.839 "trsvcid": "4420", 00:36:16.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:16.839 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:16.839 "prchk_reftag": false, 00:36:16.839 "prchk_guard": false, 00:36:16.839 "hdgst": false, 00:36:16.839 "ddgst": false, 00:36:16.839 "psk": "key1", 00:36:16.839 "allow_unrecognized_csi": false, 00:36:16.839 "method": "bdev_nvme_attach_controller", 00:36:16.839 "req_id": 1 00:36:16.839 } 00:36:16.839 Got JSON-RPC error response 00:36:16.839 response: 00:36:16.839 { 00:36:16.839 "code": -5, 00:36:16.839 "message": "Input/output error" 00:36:16.839 } 00:36:16.839 14:56:23 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:16.839 14:56:23 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:16.839 14:56:23 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:16.839 14:56:23 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:16.839 14:56:23 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:16.839 14:56:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:16.839 14:56:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:16.839 14:56:23 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:36:16.839 14:56:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:16.839 14:56:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:17.099 14:56:24 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:17.099 14:56:24 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:17.099 14:56:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:17.099 14:56:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:17.099 14:56:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:17.099 14:56:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:17.099 14:56:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.358 14:56:24 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:17.358 14:56:24 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:17.358 14:56:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:17.358 14:56:24 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:17.358 14:56:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:17.617 14:56:24 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:17.617 14:56:24 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:17.617 14:56:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.877 14:56:24 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:36:17.877 14:56:24 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.D7hXTnT0FK 00:36:17.877 14:56:24 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.D7hXTnT0FK 00:36:17.877 14:56:24 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:17.877 14:56:24 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.D7hXTnT0FK 00:36:17.877 14:56:24 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:17.877 14:56:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:17.877 14:56:24 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:17.877 14:56:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:17.877 14:56:24 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.D7hXTnT0FK 00:36:17.877 14:56:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.D7hXTnT0FK 00:36:17.877 [2024-11-20 14:56:24.826432] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.D7hXTnT0FK': 0100660 00:36:17.877 [2024-11-20 14:56:24.826450] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:17.877 request: 00:36:17.877 { 00:36:17.877 "name": "key0", 00:36:17.877 "path": "/tmp/tmp.D7hXTnT0FK", 00:36:17.877 "method": "keyring_file_add_key", 00:36:17.877 "req_id": 1 00:36:17.877 } 00:36:17.877 Got JSON-RPC error response 00:36:17.877 response: 00:36:17.877 { 00:36:17.877 "code": -1, 00:36:17.877 "message": "Operation not permitted" 00:36:17.877 } 00:36:17.877 14:56:24 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:17.877 14:56:24 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:17.877 14:56:24 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:17.877 14:56:24 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:17.877 14:56:24 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.D7hXTnT0FK 00:36:17.877 14:56:24 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.D7hXTnT0FK 00:36:17.877 14:56:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.D7hXTnT0FK 00:36:18.136 14:56:24 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.D7hXTnT0FK 00:36:18.137 14:56:25 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:18.137 14:56:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:18.137 14:56:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:18.137 14:56:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:18.137 14:56:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:18.137 14:56:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:18.137 14:56:25 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:18.137 14:56:25 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:18.137 14:56:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:18.137 14:56:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:18.137 14:56:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:18.137 14:56:25 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:18.137 14:56:25 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:18.137 14:56:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:18.137 14:56:25 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:18.137 14:56:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:18.396 [2024-11-20 14:56:25.311671] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.D7hXTnT0FK': No such file or directory 00:36:18.396 [2024-11-20 14:56:25.311685] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:18.396 [2024-11-20 14:56:25.311698] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:18.396 [2024-11-20 14:56:25.311704] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:18.396 [2024-11-20 14:56:25.311709] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:18.396 [2024-11-20 14:56:25.311714] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:18.396 request: 00:36:18.396 { 00:36:18.396 "name": "nvme0", 00:36:18.396 "trtype": "tcp", 00:36:18.396 "traddr": "127.0.0.1", 00:36:18.396 "adrfam": "ipv4", 00:36:18.396 "trsvcid": "4420", 00:36:18.396 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:18.396 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:36:18.396 "prchk_reftag": false, 00:36:18.396 "prchk_guard": false, 00:36:18.396 "hdgst": false, 00:36:18.396 "ddgst": false, 00:36:18.396 "psk": "key0", 00:36:18.396 "allow_unrecognized_csi": false, 00:36:18.396 "method": "bdev_nvme_attach_controller", 00:36:18.396 "req_id": 1 00:36:18.396 } 00:36:18.396 Got JSON-RPC error response 00:36:18.396 response: 00:36:18.396 { 00:36:18.396 "code": -19, 00:36:18.396 "message": "No such device" 00:36:18.396 } 00:36:18.396 14:56:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:18.396 14:56:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:18.396 14:56:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:18.396 14:56:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:18.396 14:56:25 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:18.396 14:56:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:18.656 14:56:25 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:18.656 14:56:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:18.656 14:56:25 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:18.656 14:56:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:18.656 14:56:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:18.656 14:56:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:18.656 14:56:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.obgBpdanC1 00:36:18.656 14:56:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:18.656 14:56:25 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:18.656 14:56:25 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:36:18.656 14:56:25 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:18.656 14:56:25 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:18.656 14:56:25 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:18.656 14:56:25 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:18.656 14:56:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.obgBpdanC1 00:36:18.656 14:56:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.obgBpdanC1 00:36:18.656 14:56:25 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.obgBpdanC1 00:36:18.656 14:56:25 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.obgBpdanC1 00:36:18.656 14:56:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.obgBpdanC1 00:36:18.656 14:56:25 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:18.656 14:56:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:18.914 nvme0n1 00:36:18.914 14:56:25 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:18.914 14:56:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:18.914 14:56:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:18.914 14:56:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:18.914 14:56:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:18.914 14:56:25 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.173 14:56:26 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:19.173 14:56:26 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:19.173 14:56:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:19.173 14:56:26 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:19.173 14:56:26 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:19.173 14:56:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.173 14:56:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:19.173 14:56:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.432 14:56:26 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:19.432 14:56:26 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:19.432 14:56:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:19.432 14:56:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:19.432 14:56:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.432 14:56:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:19.432 14:56:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.690 14:56:26 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:19.690 14:56:26 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:19.690 14:56:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:36:19.690 14:56:26 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:19.690 14:56:26 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:19.690 14:56:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.949 14:56:26 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:19.949 14:56:26 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.obgBpdanC1 00:36:19.949 14:56:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.obgBpdanC1 00:36:20.207 14:56:27 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.gvLXhKoVMM 00:36:20.207 14:56:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.gvLXhKoVMM 00:36:20.207 14:56:27 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:20.207 14:56:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:20.465 nvme0n1 00:36:20.465 14:56:27 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:20.465 14:56:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:20.724 14:56:27 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:20.724 "subsystems": [ 00:36:20.724 { 00:36:20.724 "subsystem": 
"keyring", 00:36:20.724 "config": [ 00:36:20.724 { 00:36:20.724 "method": "keyring_file_add_key", 00:36:20.724 "params": { 00:36:20.724 "name": "key0", 00:36:20.724 "path": "/tmp/tmp.obgBpdanC1" 00:36:20.724 } 00:36:20.724 }, 00:36:20.724 { 00:36:20.724 "method": "keyring_file_add_key", 00:36:20.724 "params": { 00:36:20.724 "name": "key1", 00:36:20.724 "path": "/tmp/tmp.gvLXhKoVMM" 00:36:20.724 } 00:36:20.724 } 00:36:20.724 ] 00:36:20.724 }, 00:36:20.724 { 00:36:20.724 "subsystem": "iobuf", 00:36:20.724 "config": [ 00:36:20.724 { 00:36:20.724 "method": "iobuf_set_options", 00:36:20.724 "params": { 00:36:20.724 "small_pool_count": 8192, 00:36:20.724 "large_pool_count": 1024, 00:36:20.724 "small_bufsize": 8192, 00:36:20.724 "large_bufsize": 135168, 00:36:20.724 "enable_numa": false 00:36:20.724 } 00:36:20.724 } 00:36:20.724 ] 00:36:20.724 }, 00:36:20.724 { 00:36:20.724 "subsystem": "sock", 00:36:20.724 "config": [ 00:36:20.724 { 00:36:20.724 "method": "sock_set_default_impl", 00:36:20.724 "params": { 00:36:20.724 "impl_name": "posix" 00:36:20.724 } 00:36:20.724 }, 00:36:20.724 { 00:36:20.724 "method": "sock_impl_set_options", 00:36:20.724 "params": { 00:36:20.724 "impl_name": "ssl", 00:36:20.724 "recv_buf_size": 4096, 00:36:20.724 "send_buf_size": 4096, 00:36:20.724 "enable_recv_pipe": true, 00:36:20.724 "enable_quickack": false, 00:36:20.724 "enable_placement_id": 0, 00:36:20.724 "enable_zerocopy_send_server": true, 00:36:20.724 "enable_zerocopy_send_client": false, 00:36:20.724 "zerocopy_threshold": 0, 00:36:20.724 "tls_version": 0, 00:36:20.724 "enable_ktls": false 00:36:20.724 } 00:36:20.724 }, 00:36:20.724 { 00:36:20.724 "method": "sock_impl_set_options", 00:36:20.724 "params": { 00:36:20.724 "impl_name": "posix", 00:36:20.724 "recv_buf_size": 2097152, 00:36:20.724 "send_buf_size": 2097152, 00:36:20.724 "enable_recv_pipe": true, 00:36:20.724 "enable_quickack": false, 00:36:20.724 "enable_placement_id": 0, 00:36:20.724 "enable_zerocopy_send_server": true, 
00:36:20.724 "enable_zerocopy_send_client": false, 00:36:20.724 "zerocopy_threshold": 0, 00:36:20.724 "tls_version": 0, 00:36:20.724 "enable_ktls": false 00:36:20.724 } 00:36:20.724 } 00:36:20.724 ] 00:36:20.724 }, 00:36:20.724 { 00:36:20.724 "subsystem": "vmd", 00:36:20.724 "config": [] 00:36:20.724 }, 00:36:20.724 { 00:36:20.724 "subsystem": "accel", 00:36:20.724 "config": [ 00:36:20.724 { 00:36:20.724 "method": "accel_set_options", 00:36:20.724 "params": { 00:36:20.724 "small_cache_size": 128, 00:36:20.724 "large_cache_size": 16, 00:36:20.724 "task_count": 2048, 00:36:20.724 "sequence_count": 2048, 00:36:20.724 "buf_count": 2048 00:36:20.724 } 00:36:20.724 } 00:36:20.724 ] 00:36:20.724 }, 00:36:20.724 { 00:36:20.724 "subsystem": "bdev", 00:36:20.724 "config": [ 00:36:20.724 { 00:36:20.725 "method": "bdev_set_options", 00:36:20.725 "params": { 00:36:20.725 "bdev_io_pool_size": 65535, 00:36:20.725 "bdev_io_cache_size": 256, 00:36:20.725 "bdev_auto_examine": true, 00:36:20.725 "iobuf_small_cache_size": 128, 00:36:20.725 "iobuf_large_cache_size": 16 00:36:20.725 } 00:36:20.725 }, 00:36:20.725 { 00:36:20.725 "method": "bdev_raid_set_options", 00:36:20.725 "params": { 00:36:20.725 "process_window_size_kb": 1024, 00:36:20.725 "process_max_bandwidth_mb_sec": 0 00:36:20.725 } 00:36:20.725 }, 00:36:20.725 { 00:36:20.725 "method": "bdev_iscsi_set_options", 00:36:20.725 "params": { 00:36:20.725 "timeout_sec": 30 00:36:20.725 } 00:36:20.725 }, 00:36:20.725 { 00:36:20.725 "method": "bdev_nvme_set_options", 00:36:20.725 "params": { 00:36:20.725 "action_on_timeout": "none", 00:36:20.725 "timeout_us": 0, 00:36:20.725 "timeout_admin_us": 0, 00:36:20.725 "keep_alive_timeout_ms": 10000, 00:36:20.725 "arbitration_burst": 0, 00:36:20.725 "low_priority_weight": 0, 00:36:20.725 "medium_priority_weight": 0, 00:36:20.725 "high_priority_weight": 0, 00:36:20.725 "nvme_adminq_poll_period_us": 10000, 00:36:20.725 "nvme_ioq_poll_period_us": 0, 00:36:20.725 "io_queue_requests": 512, 
00:36:20.725 "delay_cmd_submit": true, 00:36:20.725 "transport_retry_count": 4, 00:36:20.725 "bdev_retry_count": 3, 00:36:20.725 "transport_ack_timeout": 0, 00:36:20.725 "ctrlr_loss_timeout_sec": 0, 00:36:20.725 "reconnect_delay_sec": 0, 00:36:20.725 "fast_io_fail_timeout_sec": 0, 00:36:20.725 "disable_auto_failback": false, 00:36:20.725 "generate_uuids": false, 00:36:20.725 "transport_tos": 0, 00:36:20.725 "nvme_error_stat": false, 00:36:20.725 "rdma_srq_size": 0, 00:36:20.725 "io_path_stat": false, 00:36:20.725 "allow_accel_sequence": false, 00:36:20.725 "rdma_max_cq_size": 0, 00:36:20.725 "rdma_cm_event_timeout_ms": 0, 00:36:20.725 "dhchap_digests": [ 00:36:20.725 "sha256", 00:36:20.725 "sha384", 00:36:20.725 "sha512" 00:36:20.725 ], 00:36:20.725 "dhchap_dhgroups": [ 00:36:20.725 "null", 00:36:20.725 "ffdhe2048", 00:36:20.725 "ffdhe3072", 00:36:20.725 "ffdhe4096", 00:36:20.725 "ffdhe6144", 00:36:20.725 "ffdhe8192" 00:36:20.725 ] 00:36:20.725 } 00:36:20.725 }, 00:36:20.725 { 00:36:20.725 "method": "bdev_nvme_attach_controller", 00:36:20.725 "params": { 00:36:20.725 "name": "nvme0", 00:36:20.725 "trtype": "TCP", 00:36:20.725 "adrfam": "IPv4", 00:36:20.725 "traddr": "127.0.0.1", 00:36:20.725 "trsvcid": "4420", 00:36:20.725 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:20.725 "prchk_reftag": false, 00:36:20.725 "prchk_guard": false, 00:36:20.725 "ctrlr_loss_timeout_sec": 0, 00:36:20.725 "reconnect_delay_sec": 0, 00:36:20.725 "fast_io_fail_timeout_sec": 0, 00:36:20.725 "psk": "key0", 00:36:20.725 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:20.725 "hdgst": false, 00:36:20.725 "ddgst": false, 00:36:20.725 "multipath": "multipath" 00:36:20.725 } 00:36:20.725 }, 00:36:20.725 { 00:36:20.725 "method": "bdev_nvme_set_hotplug", 00:36:20.725 "params": { 00:36:20.725 "period_us": 100000, 00:36:20.725 "enable": false 00:36:20.725 } 00:36:20.725 }, 00:36:20.725 { 00:36:20.725 "method": "bdev_wait_for_examine" 00:36:20.725 } 00:36:20.725 ] 00:36:20.725 }, 00:36:20.725 { 
00:36:20.725 "subsystem": "nbd", 00:36:20.725 "config": [] 00:36:20.725 } 00:36:20.725 ] 00:36:20.725 }' 00:36:20.725 14:56:27 keyring_file -- keyring/file.sh@115 -- # killprocess 56865 00:36:20.725 14:56:27 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 56865 ']' 00:36:20.725 14:56:27 keyring_file -- common/autotest_common.sh@958 -- # kill -0 56865 00:36:20.725 14:56:27 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:20.725 14:56:27 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:20.725 14:56:27 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56865 00:36:20.725 14:56:27 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:20.725 14:56:27 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:20.725 14:56:27 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56865' 00:36:20.725 killing process with pid 56865 00:36:20.725 14:56:27 keyring_file -- common/autotest_common.sh@973 -- # kill 56865 00:36:20.725 Received shutdown signal, test time was about 1.000000 seconds 00:36:20.725 00:36:20.725 Latency(us) 00:36:20.725 [2024-11-20T13:56:27.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:20.725 [2024-11-20T13:56:27.785Z] =================================================================================================================== 00:36:20.725 [2024-11-20T13:56:27.785Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:20.725 14:56:27 keyring_file -- common/autotest_common.sh@978 -- # wait 56865 00:36:20.725 14:56:27 keyring_file -- keyring/file.sh@118 -- # bperfpid=58760 00:36:20.725 14:56:27 keyring_file -- keyring/file.sh@120 -- # waitforlisten 58760 /var/tmp/bperf.sock 00:36:20.725 14:56:27 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 58760 ']' 00:36:20.725 14:56:27 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:36:20.725 14:56:27 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:20.725 14:56:27 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:20.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:20.725 14:56:27 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:20.725 14:56:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:20.725 14:56:27 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:20.725 14:56:27 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:20.725 "subsystems": [ 00:36:20.725 { 00:36:20.725 "subsystem": "keyring", 00:36:20.725 "config": [ 00:36:20.725 { 00:36:20.725 "method": "keyring_file_add_key", 00:36:20.725 "params": { 00:36:20.725 "name": "key0", 00:36:20.725 "path": "/tmp/tmp.obgBpdanC1" 00:36:20.725 } 00:36:20.725 }, 00:36:20.725 { 00:36:20.725 "method": "keyring_file_add_key", 00:36:20.725 "params": { 00:36:20.725 "name": "key1", 00:36:20.725 "path": "/tmp/tmp.gvLXhKoVMM" 00:36:20.725 } 00:36:20.725 } 00:36:20.725 ] 00:36:20.725 }, 00:36:20.725 { 00:36:20.725 "subsystem": "iobuf", 00:36:20.725 "config": [ 00:36:20.725 { 00:36:20.725 "method": "iobuf_set_options", 00:36:20.725 "params": { 00:36:20.725 "small_pool_count": 8192, 00:36:20.725 "large_pool_count": 1024, 00:36:20.725 "small_bufsize": 8192, 00:36:20.725 "large_bufsize": 135168, 00:36:20.725 "enable_numa": false 00:36:20.725 } 00:36:20.725 } 00:36:20.725 ] 00:36:20.725 }, 00:36:20.725 { 00:36:20.725 "subsystem": "sock", 00:36:20.725 "config": [ 00:36:20.725 { 00:36:20.725 "method": "sock_set_default_impl", 00:36:20.725 "params": { 00:36:20.725 "impl_name": "posix" 00:36:20.725 } 00:36:20.725 }, 
00:36:20.725 { 00:36:20.725 "method": "sock_impl_set_options", 00:36:20.725 "params": { 00:36:20.725 "impl_name": "ssl", 00:36:20.725 "recv_buf_size": 4096, 00:36:20.725 "send_buf_size": 4096, 00:36:20.725 "enable_recv_pipe": true, 00:36:20.725 "enable_quickack": false, 00:36:20.725 "enable_placement_id": 0, 00:36:20.725 "enable_zerocopy_send_server": true, 00:36:20.725 "enable_zerocopy_send_client": false, 00:36:20.725 "zerocopy_threshold": 0, 00:36:20.725 "tls_version": 0, 00:36:20.725 "enable_ktls": false 00:36:20.725 } 00:36:20.725 }, 00:36:20.725 { 00:36:20.725 "method": "sock_impl_set_options", 00:36:20.725 "params": { 00:36:20.725 "impl_name": "posix", 00:36:20.725 "recv_buf_size": 2097152, 00:36:20.725 "send_buf_size": 2097152, 00:36:20.726 "enable_recv_pipe": true, 00:36:20.726 "enable_quickack": false, 00:36:20.726 "enable_placement_id": 0, 00:36:20.726 "enable_zerocopy_send_server": true, 00:36:20.726 "enable_zerocopy_send_client": false, 00:36:20.726 "zerocopy_threshold": 0, 00:36:20.726 "tls_version": 0, 00:36:20.726 "enable_ktls": false 00:36:20.726 } 00:36:20.726 } 00:36:20.726 ] 00:36:20.726 }, 00:36:20.726 { 00:36:20.726 "subsystem": "vmd", 00:36:20.726 "config": [] 00:36:20.726 }, 00:36:20.726 { 00:36:20.726 "subsystem": "accel", 00:36:20.726 "config": [ 00:36:20.726 { 00:36:20.726 "method": "accel_set_options", 00:36:20.726 "params": { 00:36:20.726 "small_cache_size": 128, 00:36:20.726 "large_cache_size": 16, 00:36:20.726 "task_count": 2048, 00:36:20.726 "sequence_count": 2048, 00:36:20.726 "buf_count": 2048 00:36:20.726 } 00:36:20.726 } 00:36:20.726 ] 00:36:20.726 }, 00:36:20.726 { 00:36:20.726 "subsystem": "bdev", 00:36:20.726 "config": [ 00:36:20.726 { 00:36:20.726 "method": "bdev_set_options", 00:36:20.726 "params": { 00:36:20.726 "bdev_io_pool_size": 65535, 00:36:20.726 "bdev_io_cache_size": 256, 00:36:20.726 "bdev_auto_examine": true, 00:36:20.726 "iobuf_small_cache_size": 128, 00:36:20.726 "iobuf_large_cache_size": 16 00:36:20.726 } 
00:36:20.726 }, 00:36:20.726 { 00:36:20.726 "method": "bdev_raid_set_options", 00:36:20.726 "params": { 00:36:20.726 "process_window_size_kb": 1024, 00:36:20.726 "process_max_bandwidth_mb_sec": 0 00:36:20.726 } 00:36:20.726 }, 00:36:20.726 { 00:36:20.726 "method": "bdev_iscsi_set_options", 00:36:20.726 "params": { 00:36:20.726 "timeout_sec": 30 00:36:20.726 } 00:36:20.726 }, 00:36:20.726 { 00:36:20.726 "method": "bdev_nvme_set_options", 00:36:20.726 "params": { 00:36:20.726 "action_on_timeout": "none", 00:36:20.726 "timeout_us": 0, 00:36:20.726 "timeout_admin_us": 0, 00:36:20.726 "keep_alive_timeout_ms": 10000, 00:36:20.726 "arbitration_burst": 0, 00:36:20.726 "low_priority_weight": 0, 00:36:20.726 "medium_priority_weight": 0, 00:36:20.726 "high_priority_weight": 0, 00:36:20.726 "nvme_adminq_poll_period_us": 10000, 00:36:20.726 "nvme_ioq_poll_period_us": 0, 00:36:20.726 "io_queue_requests": 512, 00:36:20.726 "delay_cmd_submit": true, 00:36:20.726 "transport_retry_count": 4, 00:36:20.726 "bdev_retry_count": 3, 00:36:20.726 "transport_ack_timeout": 0, 00:36:20.726 "ctrlr_loss_timeout_sec": 0, 00:36:20.726 "reconnect_delay_sec": 0, 00:36:20.726 "fast_io_fail_timeout_sec": 0, 00:36:20.726 "disable_auto_failback": false, 00:36:20.726 "generate_uuids": false, 00:36:20.726 "transport_tos": 0, 00:36:20.726 "nvme_error_stat": false, 00:36:20.726 "rdma_srq_size": 0, 00:36:20.726 "io_path_stat": false, 00:36:20.726 "allow_accel_sequence": false, 00:36:20.726 "rdma_max_cq_size": 0, 00:36:20.726 "rdma_cm_event_timeout_ms": 0, 00:36:20.726 "dhchap_digests": [ 00:36:20.726 "sha256", 00:36:20.726 "sha384", 00:36:20.726 "sha512" 00:36:20.726 ], 00:36:20.726 "dhchap_dhgroups": [ 00:36:20.726 "null", 00:36:20.726 "ffdhe2048", 00:36:20.726 "ffdhe3072", 00:36:20.726 "ffdhe4096", 00:36:20.726 "ffdhe6144", 00:36:20.726 "ffdhe8192" 00:36:20.726 ] 00:36:20.726 } 00:36:20.726 }, 00:36:20.726 { 00:36:20.726 "method": "bdev_nvme_attach_controller", 00:36:20.726 "params": { 00:36:20.726 
"name": "nvme0", 00:36:20.726 "trtype": "TCP", 00:36:20.726 "adrfam": "IPv4", 00:36:20.726 "traddr": "127.0.0.1", 00:36:20.726 "trsvcid": "4420", 00:36:20.726 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:20.726 "prchk_reftag": false, 00:36:20.726 "prchk_guard": false, 00:36:20.726 "ctrlr_loss_timeout_sec": 0, 00:36:20.726 "reconnect_delay_sec": 0, 00:36:20.726 "fast_io_fail_timeout_sec": 0, 00:36:20.726 "psk": "key0", 00:36:20.726 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:20.726 "hdgst": false, 00:36:20.726 "ddgst": false, 00:36:20.726 "multipath": "multipath" 00:36:20.726 } 00:36:20.726 }, 00:36:20.726 { 00:36:20.726 "method": "bdev_nvme_set_hotplug", 00:36:20.726 "params": { 00:36:20.726 "period_us": 100000, 00:36:20.726 "enable": false 00:36:20.726 } 00:36:20.726 }, 00:36:20.726 { 00:36:20.726 "method": "bdev_wait_for_examine" 00:36:20.726 } 00:36:20.726 ] 00:36:20.726 }, 00:36:20.726 { 00:36:20.726 "subsystem": "nbd", 00:36:20.726 "config": [] 00:36:20.726 } 00:36:20.726 ] 00:36:20.726 }' 00:36:20.985 [2024-11-20 14:56:27.811404] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:36:20.985 [2024-11-20 14:56:27.811462] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58760 ] 00:36:20.985 [2024-11-20 14:56:27.874789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.985 [2024-11-20 14:56:27.904533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:21.243 [2024-11-20 14:56:28.048760] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:21.811 14:56:28 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:21.811 14:56:28 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:21.811 14:56:28 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:21.811 14:56:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:21.811 14:56:28 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:21.811 14:56:28 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:21.811 14:56:28 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:21.811 14:56:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:21.811 14:56:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:21.811 14:56:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:21.811 14:56:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:21.811 14:56:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.069 14:56:28 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:22.069 14:56:28 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:22.069 14:56:28 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.069 14:56:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:22.069 14:56:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.069 14:56:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:22.069 14:56:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.069 14:56:29 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:22.069 14:56:29 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:22.069 14:56:29 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:22.069 14:56:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:22.327 14:56:29 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:22.327 14:56:29 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:22.327 14:56:29 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.obgBpdanC1 /tmp/tmp.gvLXhKoVMM 00:36:22.327 14:56:29 keyring_file -- keyring/file.sh@20 -- # killprocess 58760 00:36:22.327 14:56:29 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 58760 ']' 00:36:22.327 14:56:29 keyring_file -- common/autotest_common.sh@958 -- # kill -0 58760 00:36:22.327 14:56:29 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:22.327 14:56:29 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:22.327 14:56:29 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58760 00:36:22.327 14:56:29 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:22.327 14:56:29 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:22.327 14:56:29 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing 
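The refcnt checks in the trace above pipe `keyring_get_keys` through `jq '.[] | select(.name == "key0")'` and then `jq -r .refcnt`. A minimal Python sketch of that same filter, run against hypothetical sample data shaped like the RPC reply in this test (the names, paths, and refcnt values are illustrative, not taken from a live target):

```python
import json

# Hypothetical keyring_get_keys output, shaped like the RPC reply in this test.
keys_json = """
[
  {"name": "key0", "path": "/tmp/tmp.obgBpdanC1", "refcnt": 2},
  {"name": "key1", "path": "/tmp/tmp.gvLXhKoVMM", "refcnt": 1}
]
"""

def get_refcnt(keys: str, name: str) -> int:
    # Equivalent of: jq '.[] | select(.name == $name)' | jq -r .refcnt
    for key in json.loads(keys):
        if key["name"] == name:
            return key["refcnt"]
    raise KeyError(name)

print(get_refcnt(keys_json, "key0"))
print(get_refcnt(keys_json, "key1"))
```

The shell test then compares the extracted value arithmetically, e.g. `(( 2 == 2 ))`, which is what the `keyring/file.sh@122` and `@123` lines above are doing.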
process with pid 58760' 00:36:22.327 killing process with pid 58760 00:36:22.327 14:56:29 keyring_file -- common/autotest_common.sh@973 -- # kill 58760 00:36:22.327 Received shutdown signal, test time was about 1.000000 seconds 00:36:22.327 00:36:22.327 Latency(us) 00:36:22.327 [2024-11-20T13:56:29.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:22.327 [2024-11-20T13:56:29.387Z] =================================================================================================================== 00:36:22.327 [2024-11-20T13:56:29.387Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:22.327 14:56:29 keyring_file -- common/autotest_common.sh@978 -- # wait 58760 00:36:22.585 14:56:29 keyring_file -- keyring/file.sh@21 -- # killprocess 56703 00:36:22.585 14:56:29 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 56703 ']' 00:36:22.585 14:56:29 keyring_file -- common/autotest_common.sh@958 -- # kill -0 56703 00:36:22.585 14:56:29 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:22.585 14:56:29 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:22.585 14:56:29 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56703 00:36:22.585 14:56:29 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:22.585 14:56:29 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:22.585 14:56:29 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56703' 00:36:22.585 killing process with pid 56703 00:36:22.585 14:56:29 keyring_file -- common/autotest_common.sh@973 -- # kill 56703 00:36:22.585 14:56:29 keyring_file -- common/autotest_common.sh@978 -- # wait 56703 00:36:22.585 00:36:22.585 real 0m11.002s 00:36:22.585 user 0m26.246s 00:36:22.585 sys 0m2.230s 00:36:22.585 14:56:29 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:22.585 14:56:29 keyring_file -- 
common/autotest_common.sh@10 -- # set +x 00:36:22.585 ************************************ 00:36:22.585 END TEST keyring_file 00:36:22.585 ************************************ 00:36:22.845 14:56:29 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:36:22.845 14:56:29 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:22.845 14:56:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:22.845 14:56:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:22.845 14:56:29 -- common/autotest_common.sh@10 -- # set +x 00:36:22.845 ************************************ 00:36:22.845 START TEST keyring_linux 00:36:22.845 ************************************ 00:36:22.845 14:56:29 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:22.845 Joined session keyring: 606629027 00:36:22.845 * Looking for test storage... 
00:36:22.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:22.845 14:56:29 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:22.845 14:56:29 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:36:22.845 14:56:29 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:22.845 14:56:29 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:22.845 14:56:29 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:22.845 14:56:29 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:22.845 14:56:29 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:22.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.845 --rc genhtml_branch_coverage=1 00:36:22.845 --rc genhtml_function_coverage=1 00:36:22.845 --rc genhtml_legend=1 00:36:22.845 --rc geninfo_all_blocks=1 00:36:22.845 --rc geninfo_unexecuted_blocks=1 00:36:22.845 00:36:22.845 ' 00:36:22.845 14:56:29 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:22.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.845 --rc genhtml_branch_coverage=1 00:36:22.845 --rc genhtml_function_coverage=1 00:36:22.845 --rc genhtml_legend=1 00:36:22.845 --rc geninfo_all_blocks=1 00:36:22.845 --rc geninfo_unexecuted_blocks=1 00:36:22.845 00:36:22.845 ' 
00:36:22.845 14:56:29 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:22.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.845 --rc genhtml_branch_coverage=1 00:36:22.845 --rc genhtml_function_coverage=1 00:36:22.845 --rc genhtml_legend=1 00:36:22.845 --rc geninfo_all_blocks=1 00:36:22.845 --rc geninfo_unexecuted_blocks=1 00:36:22.845 00:36:22.845 ' 00:36:22.845 14:56:29 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:22.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.845 --rc genhtml_branch_coverage=1 00:36:22.845 --rc genhtml_function_coverage=1 00:36:22.845 --rc genhtml_legend=1 00:36:22.845 --rc geninfo_all_blocks=1 00:36:22.845 --rc geninfo_unexecuted_blocks=1 00:36:22.845 00:36:22.845 ' 00:36:22.845 14:56:29 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:22.845 14:56:29 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:22.845 14:56:29 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:22.845 14:56:29 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:22.845 14:56:29 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:22.845 14:56:29 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:22.845 14:56:29 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:22.845 14:56:29 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:22.845 14:56:29 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:22.845 14:56:29 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:22.845 14:56:29 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:22.845 14:56:29 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:22.845 14:56:29 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:36:22.845 14:56:29 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:22.845 14:56:29 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:22.845 14:56:29 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:22.846 14:56:29 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:22.846 14:56:29 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:22.846 14:56:29 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:22.846 14:56:29 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:22.846 14:56:29 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.846 14:56:29 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.846 14:56:29 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.846 14:56:29 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:22.846 14:56:29 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:36:22.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:22.846 14:56:29 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:22.846 14:56:29 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:22.846 14:56:29 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:22.846 14:56:29 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:22.846 14:56:29 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:22.846 14:56:29 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:22.846 14:56:29 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:22.846 14:56:29 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:22.846 14:56:29 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:22.846 14:56:29 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:22.846 14:56:29 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:22.846 14:56:29 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:22.846 14:56:29 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:22.846 14:56:29 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:22.846 14:56:29 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:22.846 /tmp/:spdk-test:key0 00:36:22.846 14:56:29 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:22.846 14:56:29 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:22.846 14:56:29 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:22.846 14:56:29 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:22.846 14:56:29 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:22.846 14:56:29 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:22.846 14:56:29 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:22.846 14:56:29 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:22.846 14:56:29 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:22.846 14:56:29 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:22.846 /tmp/:spdk-test:key1 00:36:22.846 14:56:29 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=59283 00:36:22.846 14:56:29 keyring_linux -- keyring/linux.sh@53 -- # 
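The `prep_key` calls above feed the configured hex string through `format_interchange_psk`, which wraps it in the NVMe TLS PSK interchange format `NVMeTLSkey-1:<hash>:<base64(key bytes + CRC-32)>:` before writing it to `/tmp/:spdk-test:key0`. The sketch below approximates that derivation; the real helper lives in `nvmf/common.sh` (the inline `python -` heredoc in the trace), and the assumption here is a little-endian zlib CRC-32 appended to the raw key bytes, so treat this as a sketch rather than the canonical implementation:

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int = 0) -> str:
    # The configured key (here the literal hex string) is treated as raw
    # bytes, a little-endian CRC-32 of those bytes is appended, and the
    # result is base64-encoded into the TLS PSK interchange format.
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, "little")
    b64 = base64.b64encode(raw + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)

psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
print(psk)
```

The `00` field is the hash/digest selector (0 meaning no retained-key hash here), and the trailing CRC lets a consumer validate the key material after base64 decoding.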
waitforlisten 59283 00:36:22.846 14:56:29 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 59283 ']' 00:36:22.846 14:56:29 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:22.846 14:56:29 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:22.846 14:56:29 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:22.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:22.846 14:56:29 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:22.846 14:56:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:22.846 14:56:29 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:23.105 [2024-11-20 14:56:29.930001] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:36:23.105 [2024-11-20 14:56:29.930056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59283 ] 00:36:23.105 [2024-11-20 14:56:29.995139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:23.105 [2024-11-20 14:56:30.027334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.364 14:56:30 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:23.364 14:56:30 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:23.364 14:56:30 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:23.364 14:56:30 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.364 14:56:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:23.364 [2024-11-20 14:56:30.197204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:23.364 null0 00:36:23.364 [2024-11-20 14:56:30.229260] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:23.364 [2024-11-20 14:56:30.229631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:23.364 14:56:30 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.364 14:56:30 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:23.364 978532191 00:36:23.364 14:56:30 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:23.364 575548365 00:36:23.364 14:56:30 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=59288 00:36:23.364 14:56:30 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 59288 /var/tmp/bperf.sock 00:36:23.364 14:56:30 keyring_linux -- 
common/autotest_common.sh@835 -- # '[' -z 59288 ']' 00:36:23.364 14:56:30 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:23.364 14:56:30 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:23.364 14:56:30 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:23.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:23.364 14:56:30 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:23.364 14:56:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:23.364 14:56:30 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:23.364 [2024-11-20 14:56:30.288262] Starting SPDK v25.01-pre git sha1 a361eb5e2 / DPDK 24.03.0 initialization... 
00:36:23.364 [2024-11-20 14:56:30.288309] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59288 ] 00:36:23.364 [2024-11-20 14:56:30.352568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:23.364 [2024-11-20 14:56:30.382446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:23.364 14:56:30 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:23.364 14:56:30 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:23.364 14:56:30 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:23.364 14:56:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:23.622 14:56:30 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:23.622 14:56:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:23.879 14:56:30 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:23.880 14:56:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:23.880 [2024-11-20 14:56:30.914796] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:24.139 nvme0n1 00:36:24.139 14:56:30 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:36:24.139 14:56:30 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:24.139 14:56:30 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:24.139 14:56:30 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:24.139 14:56:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:24.139 14:56:30 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:24.139 14:56:31 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:24.139 14:56:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:24.139 14:56:31 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:24.139 14:56:31 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:24.139 14:56:31 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:24.139 14:56:31 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:24.139 14:56:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:24.398 14:56:31 keyring_linux -- keyring/linux.sh@25 -- # sn=978532191 00:36:24.398 14:56:31 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:24.398 14:56:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:24.398 14:56:31 keyring_linux -- keyring/linux.sh@26 -- # [[ 978532191 == \9\7\8\5\3\2\1\9\1 ]] 00:36:24.398 14:56:31 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 978532191 00:36:24.398 14:56:31 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:24.398 14:56:31 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:24.398 Running I/O for 1 seconds... 00:36:25.779 24433.00 IOPS, 95.44 MiB/s 00:36:25.779 Latency(us) 00:36:25.779 [2024-11-20T13:56:32.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:25.779 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:25.779 nvme0n1 : 1.01 24433.29 95.44 0.00 0.00 5223.30 4369.07 10048.85 00:36:25.779 [2024-11-20T13:56:32.839Z] =================================================================================================================== 00:36:25.779 [2024-11-20T13:56:32.839Z] Total : 24433.29 95.44 0.00 0.00 5223.30 4369.07 10048.85 00:36:25.779 { 00:36:25.779 "results": [ 00:36:25.779 { 00:36:25.779 "job": "nvme0n1", 00:36:25.779 "core_mask": "0x2", 00:36:25.779 "workload": "randread", 00:36:25.779 "status": "finished", 00:36:25.779 "queue_depth": 128, 00:36:25.779 "io_size": 4096, 00:36:25.779 "runtime": 1.005227, 00:36:25.779 "iops": 24433.287207765014, 00:36:25.779 "mibps": 95.44252815533208, 00:36:25.779 "io_failed": 0, 00:36:25.779 "io_timeout": 0, 00:36:25.779 "avg_latency_us": 5223.295268650842, 00:36:25.779 "min_latency_us": 4369.066666666667, 00:36:25.779 "max_latency_us": 10048.853333333333 00:36:25.779 } 00:36:25.779 ], 00:36:25.779 "core_count": 1 00:36:25.779 } 00:36:25.779 14:56:32 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:25.779 14:56:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:25.779 14:56:32 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:25.779 14:56:32 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:25.779 14:56:32 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:25.779 14:56:32 
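In the bdevperf summary above, the MiB/s column is just IOPS scaled by the 4 KiB I/O size (`-o 4k` on the bdevperf command line). A quick check of the reported numbers, with the values copied from the `results` JSON emitted by the run:

```python
# Values copied from the bdevperf "results" JSON in the run above.
iops = 24433.287207765014
io_size = 4096  # bytes per I/O (-o 4k)

# MiB/s = IOPS * bytes-per-IO / bytes-per-MiB
mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 2))  # matches the reported 95.44 MiB/s
```

With a 4096-byte I/O size the conversion collapses to `iops / 256`, which is why the IOPS and MiB/s columns track each other exactly.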
keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:25.779 14:56:32 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:25.779 14:56:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.779 14:56:32 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:25.779 14:56:32 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:25.779 14:56:32 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:25.779 14:56:32 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:25.779 14:56:32 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:36:25.779 14:56:32 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:25.779 14:56:32 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:25.779 14:56:32 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:25.779 14:56:32 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:25.779 14:56:32 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:25.779 14:56:32 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:25.779 14:56:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:26.039 [2024-11-20 14:56:32.916342] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:26.039 [2024-11-20 14:56:32.917019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb9760 (107): Transport endpoint is not connected 00:36:26.039 [2024-11-20 14:56:32.918015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb9760 (9): Bad file descriptor 00:36:26.039 [2024-11-20 14:56:32.919018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:26.039 [2024-11-20 14:56:32.919025] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:26.039 [2024-11-20 14:56:32.919031] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:26.039 [2024-11-20 14:56:32.919037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:26.039 request:
00:36:26.039 {
00:36:26.039 "name": "nvme0",
00:36:26.039 "trtype": "tcp",
00:36:26.039 "traddr": "127.0.0.1",
00:36:26.039 "adrfam": "ipv4",
00:36:26.039 "trsvcid": "4420",
00:36:26.039 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:26.039 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:26.039 "prchk_reftag": false,
00:36:26.039 "prchk_guard": false,
00:36:26.039 "hdgst": false,
00:36:26.039 "ddgst": false,
00:36:26.039 "psk": ":spdk-test:key1",
00:36:26.039 "allow_unrecognized_csi": false,
00:36:26.039 "method": "bdev_nvme_attach_controller",
00:36:26.039 "req_id": 1
00:36:26.039 }
00:36:26.039 Got JSON-RPC error response
00:36:26.039 response:
00:36:26.039 {
00:36:26.039 "code": -5,
00:36:26.039 "message": "Input/output error"
00:36:26.039 }
00:36:26.039 14:56:32 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:36:26.039 14:56:32 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:36:26.039 14:56:32 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:36:26.039 14:56:32 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:36:26.039 14:56:32 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:36:26.039 14:56:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:36:26.039 14:56:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:36:26.039 14:56:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:36:26.039 14:56:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:36:26.039 14:56:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:36:26.039 14:56:32 keyring_linux -- keyring/linux.sh@33 -- # sn=978532191
00:36:26.039 14:56:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 978532191
00:36:26.039 1 links removed
00:36:26.039 14:56:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:36:26.039 14:56:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:36:26.039 14:56:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:36:26.039 14:56:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:36:26.039 14:56:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:36:26.039 14:56:32 keyring_linux -- keyring/linux.sh@33 -- # sn=575548365
00:36:26.039 14:56:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 575548365
00:36:26.039 1 links removed
00:36:26.039 14:56:32 keyring_linux -- keyring/linux.sh@41 -- # killprocess 59288
00:36:26.039 14:56:32 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 59288 ']'
00:36:26.039 14:56:32 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 59288
00:36:26.039 14:56:32 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:36:26.039 14:56:32 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:26.039 14:56:32 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59288
00:36:26.039 14:56:32 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:26.039 14:56:32 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:26.039 14:56:32 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59288'
00:36:26.039 killing process with pid 59288
00:36:26.039 14:56:32 keyring_linux -- common/autotest_common.sh@973 -- # kill 59288
00:36:26.039 Received shutdown signal, test time was about 1.000000 seconds
00:36:26.039
00:36:26.039                                                 Latency(us)
00:36:26.039 [2024-11-20T13:56:33.099Z] Device Information     : runtime(s)     IOPS     MiB/s    Fail/s     TO/s   Average       min       max
00:36:26.039 [2024-11-20T13:56:33.099Z] ===================================================================================================================
00:36:26.039 [2024-11-20T13:56:33.099Z] Total                  :      0.00     0.00      0.00      0.00     0.00      0.00      0.00
00:36:26.039 14:56:32 keyring_linux -- common/autotest_common.sh@978 -- # wait 59288
00:36:26.039
14:56:33 keyring_linux -- keyring/linux.sh@42 -- # killprocess 59283 00:36:26.039 14:56:33 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 59283 ']' 00:36:26.039 14:56:33 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 59283 00:36:26.039 14:56:33 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:26.039 14:56:33 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:26.039 14:56:33 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59283 00:36:26.298 14:56:33 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:26.298 14:56:33 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:26.298 14:56:33 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59283' 00:36:26.298 killing process with pid 59283 00:36:26.298 14:56:33 keyring_linux -- common/autotest_common.sh@973 -- # kill 59283 00:36:26.298 14:56:33 keyring_linux -- common/autotest_common.sh@978 -- # wait 59283 00:36:26.298 00:36:26.298 real 0m3.640s 00:36:26.298 user 0m6.866s 00:36:26.298 sys 0m1.201s 00:36:26.298 14:56:33 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:26.298 14:56:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:26.298 ************************************ 00:36:26.298 END TEST keyring_linux 00:36:26.298 ************************************ 00:36:26.298 14:56:33 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:26.298 14:56:33 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:26.298 14:56:33 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:26.298 14:56:33 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:26.298 14:56:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:26.298 14:56:33 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:26.298 14:56:33 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:26.298 14:56:33 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:26.298 
14:56:33 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:26.298 14:56:33 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:26.298 14:56:33 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:26.298 14:56:33 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:26.298 14:56:33 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:26.298 14:56:33 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:26.298 14:56:33 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:26.298 14:56:33 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:26.298 14:56:33 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:26.298 14:56:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:26.298 14:56:33 -- common/autotest_common.sh@10 -- # set +x 00:36:26.298 14:56:33 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:26.298 14:56:33 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:26.298 14:56:33 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:26.298 14:56:33 -- common/autotest_common.sh@10 -- # set +x 00:36:31.573 INFO: APP EXITING 00:36:31.573 INFO: killing all VMs 00:36:31.573 INFO: killing vhost app 00:36:31.573 INFO: EXIT DONE 00:36:34.108 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:36:34.108 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:36:34.108 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:36:34.108 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:36:34.108 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:36:34.108 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:36:34.108 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:36:34.108 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:36:34.108 0000:65:00.0 (144d a80a): Already using the nvme driver 00:36:34.108 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:36:34.108 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:36:34.108 0000:00:01.4 (8086 0b00): 
Already using the ioatdma driver 00:36:34.108 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:36:34.108 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:36:34.108 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:36:34.108 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:36:34.108 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:36:36.646 Cleaning 00:36:36.646 Removing: /var/run/dpdk/spdk0/config 00:36:36.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:36.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:36.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:36.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:36.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:36.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:36.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:36.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:36.646 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:36.646 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:36.646 Removing: /var/run/dpdk/spdk1/config 00:36:36.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:36.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:36.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:36.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:36.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:36.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:36.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:36.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:36.646 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:36.646 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:36.646 Removing: /var/run/dpdk/spdk2/config 00:36:36.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:36.646 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:36.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:36.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:36.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:36.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:36.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:36.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:36.646 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:36.646 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:36.646 Removing: /var/run/dpdk/spdk3/config 00:36:36.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:36.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:36.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:36.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:36.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:36.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:36.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:36.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:36.646 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:36.646 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:36.646 Removing: /var/run/dpdk/spdk4/config 00:36:36.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:36.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:36.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:36.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:36.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:36.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:36.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:36.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:36.646 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:36.646 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:36:36.646 Removing: /dev/shm/bdev_svc_trace.1 00:36:36.646 Removing: /dev/shm/nvmf_trace.0 00:36:36.646 Removing: /dev/shm/spdk_tgt_trace.pid3656803 00:36:36.646 Removing: /var/run/dpdk/spdk0 00:36:36.646 Removing: /var/run/dpdk/spdk1 00:36:36.646 Removing: /var/run/dpdk/spdk2 00:36:36.646 Removing: /var/run/dpdk/spdk3 00:36:36.646 Removing: /var/run/dpdk/spdk4 00:36:36.646 Removing: /var/run/dpdk/spdk_pid10845 00:36:36.646 Removing: /var/run/dpdk/spdk_pid19170 00:36:36.646 Removing: /var/run/dpdk/spdk_pid19175 00:36:36.646 Removing: /var/run/dpdk/spdk_pid2391 00:36:36.646 Removing: /var/run/dpdk/spdk_pid25275 00:36:36.646 Removing: /var/run/dpdk/spdk_pid27945 00:36:36.646 Removing: /var/run/dpdk/spdk_pid30519 00:36:36.646 Removing: /var/run/dpdk/spdk_pid32025 00:36:36.646 Removing: /var/run/dpdk/spdk_pid34761 00:36:36.646 Removing: /var/run/dpdk/spdk_pid36283 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3655112 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3656803 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3657494 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3658691 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3658766 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3660142 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3660174 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3660647 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3661738 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3662701 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3663092 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3663488 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3663835 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3664100 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3664333 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3664686 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3665068 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3665466 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3669041 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3669393 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3669433 
00:36:36.646 Removing: /var/run/dpdk/spdk_pid3669547 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3670117 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3670133 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3670508 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3670511 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3670875 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3670882 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3671240 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3671249 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3671695 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3672043 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3672441 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3676965 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3682498 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3695407 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3696408 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3701814 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3702170 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3707574 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3714886 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3718184 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3731497 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3743080 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3745228 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3746565 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3768215 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3773034 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3832463 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3839179 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3846683 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3855332 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3855424 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3856463 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3857552 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3858790 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3859461 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3859467 00:36:36.646 Removing: 
/var/run/dpdk/spdk_pid3859804 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3859999 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3860136 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3861142 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3862167 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3863471 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3864163 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3864297 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3864610 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3865909 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3866984 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3877622 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3911637 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3917066 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3919369 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3921715 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3922049 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3922065 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3922306 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3922784 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3925123 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3926036 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3926562 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3929267 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3929969 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3930674 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3935900 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3942796 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3942797 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3942798 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3947791 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3958996 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3964291 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3971969 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3973770 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3975599 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3977139 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3983152 
00:36:36.646 Removing: /var/run/dpdk/spdk_pid3988608 00:36:36.646 Removing: /var/run/dpdk/spdk_pid3993665 00:36:36.646 Removing: /var/run/dpdk/spdk_pid4003116 00:36:36.646 Removing: /var/run/dpdk/spdk_pid4003119 00:36:36.646 Removing: /var/run/dpdk/spdk_pid4008475 00:36:36.646 Removing: /var/run/dpdk/spdk_pid4008805 00:36:36.646 Removing: /var/run/dpdk/spdk_pid4008946 00:36:36.646 Removing: /var/run/dpdk/spdk_pid4009514 00:36:36.646 Removing: /var/run/dpdk/spdk_pid4009539 00:36:36.646 Removing: /var/run/dpdk/spdk_pid4015223 00:36:36.646 Removing: /var/run/dpdk/spdk_pid4016053 00:36:36.646 Removing: /var/run/dpdk/spdk_pid4021621 00:36:36.646 Removing: /var/run/dpdk/spdk_pid4025747 00:36:36.646 Removing: /var/run/dpdk/spdk_pid4032326 00:36:36.646 Removing: /var/run/dpdk/spdk_pid4039371 00:36:36.646 Removing: /var/run/dpdk/spdk_pid4049910 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4059029 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4059032 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4081269 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4082063 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4082737 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4083861 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4084596 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4085271 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4085946 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4086624 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4091677 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4092013 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4099710 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4100086 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4106889 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4112257 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4124875 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4125622 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4131012 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4131436 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4136658 00:36:36.916 Removing: 
/var/run/dpdk/spdk_pid4143839 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4147650 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4160458 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4171808 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4174017 00:36:36.916 Removing: /var/run/dpdk/spdk_pid4175141 00:36:36.916 Removing: /var/run/dpdk/spdk_pid46545 00:36:36.916 Removing: /var/run/dpdk/spdk_pid47213 00:36:36.916 Removing: /var/run/dpdk/spdk_pid47875 00:36:36.916 Removing: /var/run/dpdk/spdk_pid50827 00:36:36.916 Removing: /var/run/dpdk/spdk_pid51445 00:36:36.916 Removing: /var/run/dpdk/spdk_pid52083 00:36:36.916 Removing: /var/run/dpdk/spdk_pid56703 00:36:36.916 Removing: /var/run/dpdk/spdk_pid56865 00:36:36.916 Removing: /var/run/dpdk/spdk_pid58760 00:36:36.916 Removing: /var/run/dpdk/spdk_pid59283 00:36:36.916 Removing: /var/run/dpdk/spdk_pid59288 00:36:36.916 Removing: /var/run/dpdk/spdk_pid7339 00:36:36.916 Clean 00:36:36.916 14:56:43 -- common/autotest_common.sh@1453 -- # return 0 00:36:36.916 14:56:43 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:36.916 14:56:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:36.916 14:56:43 -- common/autotest_common.sh@10 -- # set +x 00:36:36.916 14:56:43 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:36.916 14:56:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:36.916 14:56:43 -- common/autotest_common.sh@10 -- # set +x 00:36:36.916 14:56:43 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:36.917 14:56:43 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:36.917 14:56:43 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:36.917 14:56:43 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:36.917 14:56:43 -- spdk/autotest.sh@398 -- # hostname 00:36:36.917 14:56:43 -- spdk/autotest.sh@398 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:37.309 geninfo: WARNING: invalid characters removed from testname! 00:36:55.402 14:57:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:57.309 14:57:03 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:58.689 14:57:05 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:00.597 14:57:07 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:01.977 14:57:08 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:03.888 14:57:10 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:05.266 14:57:12 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:05.266 14:57:12 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:05.266 14:57:12 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:37:05.266 14:57:12 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:05.266 14:57:12 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:05.266 14:57:12 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:05.266 + [[ -n 3575487 ]] 00:37:05.266 + sudo kill 3575487 00:37:05.276 [Pipeline] } 00:37:05.291 [Pipeline] // 
stage 00:37:05.297 [Pipeline] } 00:37:05.311 [Pipeline] // timeout 00:37:05.316 [Pipeline] } 00:37:05.330 [Pipeline] // catchError 00:37:05.335 [Pipeline] } 00:37:05.350 [Pipeline] // wrap 00:37:05.356 [Pipeline] } 00:37:05.370 [Pipeline] // catchError 00:37:05.380 [Pipeline] stage 00:37:05.382 [Pipeline] { (Epilogue) 00:37:05.396 [Pipeline] catchError 00:37:05.397 [Pipeline] { 00:37:05.411 [Pipeline] echo 00:37:05.413 Cleanup processes 00:37:05.419 [Pipeline] sh 00:37:05.795 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:05.795 71922 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:05.809 [Pipeline] sh 00:37:06.093 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:06.093 ++ grep -v 'sudo pgrep' 00:37:06.093 ++ awk '{print $1}' 00:37:06.093 + sudo kill -9 00:37:06.093 + true 00:37:06.106 [Pipeline] sh 00:37:06.389 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:16.419 [Pipeline] sh 00:37:16.704 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:16.704 Artifacts sizes are good 00:37:16.721 [Pipeline] archiveArtifacts 00:37:16.731 Archiving artifacts 00:37:16.888 [Pipeline] sh 00:37:17.173 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:17.189 [Pipeline] cleanWs 00:37:17.200 [WS-CLEANUP] Deleting project workspace... 00:37:17.200 [WS-CLEANUP] Deferred wipeout is used... 00:37:17.208 [WS-CLEANUP] done 00:37:17.210 [Pipeline] } 00:37:17.228 [Pipeline] // catchError 00:37:17.240 [Pipeline] sh 00:37:17.523 + logger -p user.info -t JENKINS-CI 00:37:17.533 [Pipeline] } 00:37:17.546 [Pipeline] // stage 00:37:17.552 [Pipeline] } 00:37:17.566 [Pipeline] // node 00:37:17.571 [Pipeline] End of Pipeline 00:37:17.609 Finished: SUCCESS